
Showing papers on "Data management published in 1984"


Book
01 Jan 1984
TL;DR: In this paper, the authors present a comprehensive introduction to management, covering managers and their environments, managing work and organizations, managing people, managing production and operations, and special topics such as entrepreneurship and careers in management.
Abstract: Part 1 Management and the environment: managers and the evolution of management managers and their environments managing in a global environment social and ethical responsibilities of management. Part 2 Managing work and organizations: management decision-making the planning function strategic planning the organizing function organization design the controlling function. Part 3 Managing people in organizations: motivation managing work groups leading people in organizations communication and negotiation human resource management organization change, development and innovation. Part 4 Managing production and operations: production and operations management production and inventory planning and control managing information for decision making. Part 5 Special management topics: entrepreneurship careers in management. Appendix: Internet exercises.

402 citations


01 Nov 1984
TL;DR: In this article, the authors argue that information processing in organizations is shaped by two forces, equivocality and uncertainty, and propose models that link structural characteristics to the levels of equivocality and uncertainty that arise from organizational technology, interdepartmental relationships, and the environment.
Abstract: This paper argues that information processing in organizations is influenced by two forces--equivocality and uncertainty. Equivocality is reduced through the use of rich media and the enactment of a shared interpretation among managers (Weick, 1979). Uncertainty is reduced by acquiring and processing additional data (Galbraith, 1973; Tushman and Nadler, 1978). Elements of organization structure vary in their capacity to reduce equivocality versus uncertainty. Models are proposed that link structural characteristics to the level of equivocality and uncertainty that arise from organizational technology, interdepartmental relationships, and the environment.

139 citations


Proceedings ArticleDOI
01 Jun 1984
TL;DR: In this paper, the use of commands in a query language as an abstract data type (ADT) in data base management systems is explored; an ADT facility allows new data types, such as polygons, lines, money, time, arrays of floating point numbers, and bit vectors, to supplement the built-in types.
Abstract: This paper explores the use of commands in a query language as an abstract data type (ADT) in data base management systems. Basically, an ADT facility allows new data types, such as polygons, lines, money, time, arrays of floating point numbers, bit vectors, etc., to supplement the built-in data types in a data base system. In this paper we demonstrate the power of adding a data type corresponding to commands in a query language. We also propose three extensions to the query language QUEL to enhance its power in this augmented environment.
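
The ADT facility described above is easy to picture in code. Below is a minimal Python sketch, rather than the paper's QUEL setting, of a registry through which user-defined types such as polygons supplement a system's built-in types; all names and the bounding-box test are our own illustration, not the paper's design.

```python
# Minimal sketch of an abstract-data-type (ADT) facility for a DBMS:
# new types (polygon, money, ...) supplement the built-in types.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ADT:
    name: str                        # e.g. "polygon"
    parse: Callable[[str], Any]      # external text -> internal form
    render: Callable[[Any], str]     # internal form -> external text
    operators: dict[str, Callable]   # type-specific operators

registry: dict[str, ADT] = {}

def register_adt(adt: ADT) -> None:
    """Make a user-defined type available to the query processor."""
    registry[adt.name] = adt

# Example: a 'polygon' type with an 'overlaps' operator (stub geometry).
def parse_polygon(text: str) -> list[tuple[float, float]]:
    pts = text.strip("()").split(";")
    return [tuple(map(float, p.split(","))) for p in pts]

def polygons_overlap(a, b) -> bool:
    # Bounding-box test as a placeholder for real polygon geometry.
    ax = [p[0] for p in a]; ay = [p[1] for p in a]
    bx = [p[0] for p in b]; by = [p[1] for p in b]
    return not (max(ax) < min(bx) or max(bx) < min(ax)
                or max(ay) < min(by) or max(by) < min(ay))

register_adt(ADT("polygon", parse_polygon, str,
                 {"overlaps": polygons_overlap}))
```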

72 citations



Proceedings Article
27 Aug 1984
TL;DR: The purpose of this paper is to identify the special database management needs of scientific databases, and to point out directions for further research specifically oriented to these needs.
Abstract: The purpose of this paper is to examine the kinds of data and usage of scientific databases and to identify common characteristics among the different disciplines. Most scientific databases do not use general purpose database management systems (DBMSs). The main reason is that they have data structures and usage patterns that cannot be easily accommodated by existing DBMSs. It is the purpose of this paper to identify the special database management needs of scientific databases, and to point out directions for further research specifically oriented to these needs. In the past, we have studied "statistical databases", which are databases that are primarily collected for statistical analysis purposes. We have found that the observations and techniques developed for statistical databases are useful for scientific databases. The reason that common characteristics exist is that many scientific databases are often subject to statistical analysis. However, we found that scientific databases have additional stages of data collection and analysis that induce more complexity. We discuss the different types of scientific databases, and list the properties identified for them. Ten examples are then analyzed with respect to the types of data and their properties, and summarized in two tables. Conclusions are drawn as to the preferable data management methods needed in support of scientific databases.

58 citations


Proceedings ArticleDOI
24 Apr 1984
TL;DR: The Mermaid testbed provides a uniform front end that makes the complexity of manipulating data in distributed heterogeneous databases under various data management systems transparent to the user.
Abstract: The Mermaid testbed system has been developed as part of an ongoing research program at SDC to explore issues in distributed data management. The Mermaid testbed provides a uniform front end that makes the complexity of manipulating data in distributed heterogeneous databases under various data management systems (DBMSs) transparent to the user. It is being used to test query optimization algorithms as well as user interface methodology.
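
As a rough illustration of what such a uniform front end involves, the following Python sketch shows per-site drivers translating one common query into local dialects and merging the results; the class names and translation functions are assumptions for illustration, not Mermaid's actual interfaces.

```python
# Illustrative sketch of a uniform front end over heterogeneous DBMSs:
# the user poses one query; per-site drivers translate it into each
# local dialect, and the front end merges the results.

class SiteDriver:
    """Adapter hiding one local DBMS behind a common interface."""
    def __init__(self, name, translate, execute):
        self.name = name
        self.translate = translate   # common query -> local dialect
        self.execute = execute       # run local query, return rows

class FrontEnd:
    def __init__(self, drivers):
        self.drivers = drivers

    def query(self, common_query):
        rows = []
        for d in self.drivers:
            local = d.translate(common_query)   # per-DBMS translation
            rows.extend(d.execute(local))       # location transparency
        return rows

# Example: two sites with different dialects, same logical relation.
sqlish = SiteDriver("siteA", lambda q: f"SELECT * WHERE {q}",
                    lambda q: [("a", 1)])
quelish = SiteDriver("siteB", lambda q: f"retrieve where {q}",
                     lambda q: [("b", 2)])
print(FrontEnd([sqlish, quelish]).query("x > 0"))   # [('a', 1), ('b', 2)]
```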

34 citations


Book
01 Nov 1984
TL;DR: In this paper, the authors introduce management concepts in a clear and stimulating manner to students of diverse backgrounds. Working from the premise that productivity is one of the keys to a successful organization and that good management will promote this productivity, they focus on management and the wide range of factors which affect managers today.
Abstract: This book, which is divided into seven parts, attempts to introduce management concepts in a clear and stimulating manner to students of diverse backgrounds. Working from the premise that productivity is one of the keys to a successful organization and that good management will promote this productivity, it focuses on management and the wide range of factors which affect managers today. The seven parts cover: the nature of management; understanding and managing individual behavior; managing groups and influence in organizations; organizations; management processes; managing for productivity improvement; and preparing for the future.

29 citations


Journal ArticleDOI
TL;DR: The components of a data management system are presented and procedures for use in each component of the system are described to obtain high quality data.
Abstract: An effective data management system ensures high quality research data by making certain of the proper execution of the study design. This paper presents the components of a data management system and describes procedures for use in each component of the system to obtain high quality data. We discuss the interrelationship among the components of the data management system and the relationship of the data management system to other parts of the research project. We identify underlying principles in design and implementation of a data management system to ensure high quality data.
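
To make one such component concrete, here is a hedged Python sketch of field-level validation at data entry, a typical procedure for catching errors before data reach analysis; the field names and bounds are invented for illustration and are not drawn from the paper.

```python
# Minimal sketch (assumed, not from the paper) of one data-management
# component: field-level validation at data entry, so errors are
# caught before the data reach analysis.

RULES = {
    "age":    lambda v: 0 <= v <= 120,
    "weight": lambda v: 1 <= v <= 500,   # kg; illustrative bounds
}

def validate(record: dict) -> list[str]:
    """Return a list of quality problems found in one record."""
    problems = []
    for field, ok in RULES.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not ok(record[field]):
            problems.append(f"out of range: {field}={record[field]}")
    return problems

print(validate({"age": 150, "weight": 70}))   # ['out of range: age=150']
```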

27 citations


Journal ArticleDOI
Neches

26 citations


Journal ArticleDOI
TL;DR: The RESEDA project is concerned with the construction of Artificial Intelligence management systems working on factual databases consisting of biographical data, and this data is described using a particular Knowledge Representation language based on the Artificial Intelligence understanding of a “Case Grammar” approach.
Abstract: The RESEDA project is concerned with the construction of Artificial Intelligence (AI) management systems working on factual databases consisting of biographical data; this data is described using a particular Knowledge Representation language ("meta-language") based on the Artificial Intelligence understanding of a "Case Grammar" approach. The "computing kernel" of the system consists of an inference interpreter. Where it is not possible to find a direct response to the (formal) question posed, RESEDA tries to answer indirectly by using a first stage of inference procedures ("transformations"). Moreover, the system is able to establish automatically new causal links between the statements represented in the base, on the basis of "hypotheses", of a somewhat general nature, about the class of possible relationships. In this case, the result of the inference operations can thus modify, at least in principle, the original content of the database.
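
The two-stage behavior described, direct lookup first and rule-based rewriting second, can be sketched compactly. The Python below is our illustration of the idea, with invented predicates and a single "transformation" rule; it is not RESEDA's metalanguage or inference interpreter.

```python
# Hedged sketch of "transformation"-style inference: when a query has
# no direct match in the factual base, rewrite it with a rule and retry.
facts = {("taught_at", "Abelard", "Paris")}

# Each transformation maps a queried predicate to ones that may entail it.
transformations = {
    "lived_in": ["taught_at"],   # teaching somewhere implies residence
}

def answer(pred, *args):
    if (pred, *args) in facts:                  # first stage: direct lookup
        return True
    for alt in transformations.get(pred, []):   # second stage: rewrite
        if (alt, *args) in facts:
            return True
    return False

print(answer("lived_in", "Abelard", "Paris"))   # True, via transformation
```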

24 citations


Journal ArticleDOI
TL;DR: An operational method is proposed for evaluating the cost of accessing the tuples of a relation via an index.

Book Chapter
01 Jan 1984

Journal ArticleDOI
TL;DR: The purpose of this paper is to present the CAL/SAP development system and to illustrate that computer independent programs in structural engineering and computational mechanics can be developed which operate on both large and small computers.


Proceedings Article
27 Aug 1984
TL;DR: The design of the Personal Data Manager is discussed, including the conceptual information model, the user interface, and a prototype implementation, which attempts to make a personal computer serve as an extension of its user's memory.
Abstract: 1. Introduction. The Personal Data Manager (PDM) is a simple database system for personal computers. PDM is intended to provide personal information management capabilities for the large class of personal computer users who are not computer experts, and who have no programming experience. PDM simply attempts to make a personal computer serve as an extension of its user's memory. PDM is based on a simple conceptual database model that includes high-level semantic modeling constructs, such as objects, object kinds (types), attributes, and object frames. A prescriptive user interface allows the contents of the database and the structure of the information in the database to be changed dynamically. A working kind is a run-time collection of database objects defined via the user interface; working kinds can be interactively restricted and expanded and made part of the permanent database. This paper discusses the design of the Personal Data Manager, including the conceptual information model, the user interface, and a prototype implementation. The past several years have seen a dramatic proliferation of personal computers, and anticipated future technological advances will continue to increase their power and reduce their cost. These flexible tools are improving the accessibility of computing resources for end-users who are not computer experts, and who may have little or no programming experience. While personal computers support a wide variety of specific applications, perhaps one of the most exciting and far-reaching potential uses of a personal computer is as a general-purpose information manager and a tool for information sharing and communication. The class of potential end-users of a personal information manager is dominated by the large number of "home computer" users who are by no means computer experts, and who have little or no programming experience. Typical end-users may use an information system to manage personal data from a wide variety of applications, such as a phone directory, a calendar, an entertainment guide, a recipe file, a wine list, an index of vacation slides, etc. Novice end-users, however, may successfully utilize an information system only if it provides a simple and easily understandable interface that supports database communication and interaction in a straightforward manner. Potential end-users of a personal data manager also include professionals and engineers who need capabilities to manage office information and design data.
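
The modeling constructs named in the abstract, objects, kinds, attributes, and run-time working kinds, lend themselves to a short sketch. The Python below is our own rendering under those assumptions, not PDM's implementation.

```python
# Sketch of PDM-style constructs: objects of a kind with attributes,
# and "working kinds" built at run time by restricting a kind.

class Obj:
    def __init__(self, kind, **attrs):
        self.kind, self.attrs = kind, attrs

class Database:
    def __init__(self):
        self.objects = []

    def add(self, obj):
        self.objects.append(obj)

    def kind(self, name):
        """All permanent objects of one kind (type)."""
        return [o for o in self.objects if o.kind == name]

    def working_kind(self, name, predicate):
        """Run-time collection: a kind restricted interactively."""
        return [o for o in self.kind(name) if predicate(o)]

db = Database()
db.add(Obj("recipe", title="Ratatouille", cuisine="French"))
db.add(Obj("recipe", title="Paella", cuisine="Spanish"))
french = db.working_kind("recipe", lambda o: o.attrs["cuisine"] == "French")
print([o.attrs["title"] for o in french])   # ['Ratatouille']
```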


01 Sep 1984
TL;DR: The requirements for a database management system that would satisfy the scientific needs of the Scientific Database Project are discussed, based on a system developed by Deutsch.
Abstract: This document discusses the requirements for a database management system that would satisfy the scientific needs of the Scientific Database Project. We give the major requirements of scientific data management, based on a system developed by Deutsch. Actual requirements, for each category, are identified as mandatory, important, and optional. Mandatory - we should not consider a DBMS unless it satisfies all mandatory requirements. Important - these requirements, while not as crucial as the mandatory ones, are important to the easy and convenient implementation and operation of a scientific database. Optional - such features are nice extras. We expect that the scientific database project can be implemented and operated in any DBMS that meets all of the mandatory and most of the important requirements.
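
The screening rule stated above, all mandatory requirements plus most of the important ones, amounts to a simple decision procedure. A minimal Python sketch follows; the requirement names and the numeric reading of "most" are our assumptions for illustration.

```python
# Sketch of the selection rule: reject any DBMS missing a mandatory
# requirement; among survivors, require "most" of the important ones.

def acceptable(dbms, mandatory, important, threshold=0.75):
    """dbms is the set of requirement names the candidate satisfies."""
    if not mandatory <= dbms:                 # all mandatory required
        return False
    met = len(dbms & important) / len(important)
    return met >= threshold                   # "most" of the important ones

mandatory = {"bulk_load", "ad_hoc_query"}
important = {"arrays", "metadata", "transposition", "compression"}
print(acceptable({"bulk_load", "ad_hoc_query", "arrays", "metadata",
                  "transposition"}, mandatory, important))   # True (3/4 met)
```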

Journal ArticleDOI
TL;DR: This paper proposes the introduction of the concept of “consistency-preserving containers” into data management in order to improve data management support for database management and introduces a locking scheme allowing high concurrency of transactions while guaranteeing consistency.
Abstract: Database management systems today face a rising demand for higher transaction rates and shorter response time. One of the essential requirements for meeting this demand is appropriate operating system support for database management functions. This paper proposes the introduction of the concept of "consistency-preserving containers" into data management in order to improve data management support for database management. Moreover, it presents the formal definition of a transaction-oriented interface between data management and database management based on this concept, and introduces a locking scheme allowing high concurrency of transactions while guaranteeing consistency. Logging and recovery aspects of the concept are discussed.
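
As a rough sketch of the locking idea, the Python below groups data into containers and grants shared access to concurrent readers while writers wait for exclusive access; this is a textbook readers-writer scheme standing in for the paper's protocol, whose details the abstract does not give.

```python
# Illustrative container-level locking: readers run concurrently,
# writers get exclusive access, so container contents stay consistent.
import threading

class Container:
    """Unit of locking whose contents stay mutually consistent."""
    def __init__(self):
        self._lock = threading.Lock()     # held while anyone writes
        self._readers = 0
        self._gate = threading.Lock()     # protects the reader count

    def acquire_shared(self):
        with self._gate:
            self._readers += 1
            if self._readers == 1:
                self._lock.acquire()      # first reader blocks writers

    def release_shared(self):
        with self._gate:
            self._readers -= 1
            if self._readers == 0:
                self._lock.release()      # last reader admits writers

    def acquire_exclusive(self):
        self._lock.acquire()              # writer waits for all readers

    def release_exclusive(self):
        self._lock.release()
```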



Proceedings ArticleDOI
Hans Helmut Diel, Gerald Kreissig, Norbert Lenz, Michael Scheible, Bernd Schoener
01 Jun 1984
TL;DR: The paper describes the part of a general operating system Kernel supporting data management functions that provide a powerful basis for the implementation of different kinds of access methods and file systems, including data base systems.
Abstract: The paper describes the part of a general operating system Kernel supporting data management functions. The operating system Kernel can be imbedded into microcode and viewed as an extended hardware interface. Four Kernel instructions are defined to support data management. They provide a powerful basis for the implementation of different kinds of access methods and file systems, including data base systems. Advanced transaction processing concepts such as concurrency control, support of backout, commit and a variety of share options are included.
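
The abstract does not name the four Kernel instructions, so the following Python sketch simply assumes a plausible set, read, write, commit, and backout with journaling, to illustrate how a small instruction set can underpin access methods and transaction backout.

```python
# Hypothetical four-instruction data-management kernel (names assumed):
# read/write over a key-value store, with a journal enabling backout.

class DataKernel:
    def __init__(self):
        self.store, self.journal = {}, []

    def read(self, key):
        return self.store.get(key)

    def write(self, key, value, txn):
        # Journal the old value so the transaction can be backed out.
        self.journal.append((txn, key, self.store.get(key)))
        self.store[key] = value

    def commit(self, txn):
        # Make the transaction's writes permanent: drop its undo records.
        self.journal = [e for e in self.journal if e[0] != txn]

    def backout(self, txn):
        # Undo the transaction's writes in reverse order.
        for _, key, old in reversed([e for e in self.journal
                                     if e[0] == txn]):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.commit(txn)   # discard the now-applied undo records
```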

Journal ArticleDOI
TL;DR: In this paper, the conceptual basis of integrated management is explored from a systemic viewpoint, and both an organizational component and a program component that are distinct yet interrelated are explored from this perspective.
Abstract: Effective project management requires that all planning and management control activities be fully integrated. Planning must encompass operational, tactical, and strategic considerations; functionally oriented efforts must be properly blended into a unified whole; and project technical performance, cost, and schedule parameters must be integrated into a systemic composite. Viewed from this perspective, integrated management has both an organizational component and a program component that are distinct yet interrelated. The conceptual basis of these two components of integrated management are explored from a systemic viewpoint.


Proceedings Article
01 Jan 1984
TL;DR: The main results of recent research on temporally sensitive data models are summarized, the lessons learned in their development are discussed, and the prospects and difficulties involved in incorporating a temporal dimension into database management systems (TODBs) are assessed.
Abstract: Attention to the temporal aspects of data management has intensified in recent years, focusing on data models and related systems that are sensitive to the ubiquitous temporal aspects of data. Both the growing need for easier access to historical data and the imminent availability of mass storage devices are making this a promising branch of database research, both practically and theoretically. In this paper we summarize the main results of recent research on temporally sensitive data models, discuss the lessons learned in their development, and assess the prospects and difficulties involved in incorporating a temporal dimension into database management systems (TODBs). In particular, three system levels are identified: the external user view of the database; an intermediate view closer to the structure of an existing data model; and an internal or implementation view defined in terms of low-level data structures. This general architecture coherently incorporates a variety of related research results and development experiences, and serves as the framework for theoretical and implementation research into such systems. Introduction: The underlying premise of this expanding body of research is the recognition that time is not merely another dimension, or another data item tagged along with each tuple, but rather a more fundamental organizing aspect that human users treat in very special ways. The results of this research reinforce the perception that designing temporal features into information systems requires new and different conceptual tools. It seems not only natural but even somewhat tardy that in our never-ending quest to capture more semantics in formal information systems, we are beginning to augment our conceptual models with a temporal dimension. Indeed, there is growing research interest in the nature of time in computer-based information systems and the handling of temporal aspects of data. Roughly 50 references to the subject were identified and annotated by Bolour (1982), addressing four major topical areas: (1) conceptual data modeling, an extension to the relational model to incorporate a built-in semantics for time (Clifford, 1983 (a)); (2) design and implementation of historical databases, the organization of write-once historical databases (Ariav, 1981), and implementation of temporally oriented medical databases (Wiederhold, 1975); (3) "dynamic databases", the modeling of transition rules and temporal inferences from these rules (May, 1981); and (4) AI-related research, the temporal understanding of time-oriented data (Kahn, 1975). A recent panel brought together many researchers in the field to discuss their work and identify promising research areas (Ariav, 1983 (a)). At the panel, four areas of research were identified, and in this paper we focus on two of these issues, namely the implementation of temporal DBMSs and the data models underlying them. In most existing information systems, aspects of the data that refer to time are usually either neglected, treated only implicitly, or explicitly factored out (Tsichritzis, 1982). None of the three major data models incorporates a temporal dimension; users of systems based on these models who need temporal information must resort to patchwork solutions to circumvent the limitations of their systems. Furthermore, most information systems typically differentiate between present- and past-related questions in terms of data accessibility (e.g., online and offline storage, current database and log tapes). It is important to note that this situation prevails not because …
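
To make the notion of a temporally sensitive model concrete, here is a minimal Python sketch in which each attribute keeps its full write history and supports "as of" queries; the structure is our illustration of the general idea, not any specific model surveyed in the paper.

```python
# Minimal temporally sensitive store: every write is kept with its
# valid-from timestamp, so "as of" queries can reconstruct history.
import bisect

class TemporalAttribute:
    def __init__(self):
        self.times, self.values = [], []      # parallel, sorted by time

    def set(self, t, value):
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)

    def as_of(self, t):
        """Value in force at time t, or None before the first write."""
        i = bisect.bisect_right(self.times, t)
        return self.values[i - 1] if i else None

salary = TemporalAttribute()
salary.set(1980, 20000)
salary.set(1983, 25000)
print(salary.as_of(1982))   # 20000: the 1980 value still held in 1982
```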

01 Mar 1984
TL;DR: This paper focuses on ZOG as a database management system using a set of common database problems as a framework and the ZOG approach to database management is discussed and compared with conventional approaches.
Abstract: ZOG is a general-purpose human-computer interface system that combines the features of a database system, a word processing system, and an operating system shell. The primary features of ZOG are: an emphasis on menu-selection as the primary interface mode; the use of the selection process for navigation in the database, editing the content and structure of the database, and interaction with programs; an architecture that supports the implementation and growth of very large, distributed databases; and rapid system response. A distributed MIS, based on the ZOG concept, was developed by Carnegie-Mellon University for the USS CARL VINSON, a nuclear-powered aircraft carrier, in cooperation with the ship's crew. This system is a distributed database system implemented on a network of high-powered personal computers (PERQs). This paper focuses on ZOG as a database management system. Using a set of common database problems as a framework, the ZOG approach to database management is discussed and compared with conventional approaches.
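
ZOG's central mechanism, menu selection over a graph of frames, can be suggested in a few lines. The frame names and labels in this Python sketch are invented; only the navigation pattern reflects the description above.

```python
# Sketch of ZOG-style navigation: the database is a graph of frames,
# each with a title and selectable items leading to other frames.

frames = {
    "top":   {"title": "Ship MIS",    "items": [("Maintenance", "maint")]},
    "maint": {"title": "Maintenance", "items": [("Back to top", "top")]},
}

def navigate(start="top"):
    current = start
    while True:
        frame = frames[current]
        print(frame["title"])
        for n, (label, _) in enumerate(frame["items"], 1):
            print(f"  {n}. {label}")
        choice = input("select> ")     # menu selection is the primary
        if not choice.isdigit():       # interface mode; anything else quits
            return
        current = frame["items"][int(choice) - 1][1]
```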

01 Jan 1984
TL;DR: It is shown that this and many other tasks of the protection engineer can be relieved by the computer-aided design (CAD) concept and complete and precise primary/backup coordination criteria are identified for directional relays on transmission networks.
Abstract: The most tedious and time-consuming task for protection engineers is selecting and setting protective relays. It is shown that this and many other tasks of the protection engineer can be relieved by the computer-aided design (CAD) concept. The requirements of such a CAD system are specified. In the process, complete and precise primary/backup coordination criteria are identified for directional relays on transmission networks. Criteria are included for both overcurrent and 3-zone distance relays and for two and three terminal lines. A successful CAD system depends upon a suitable man-machine dialog, effective algorithms and convenient data management. We have begun the appropriate man-machine dialog structuring for the protection CAD system. Algorithms necessary for computer aided relay coordination have been developed. These algorithms consist of one for network analysis and one each for overcurrent and distance relay coordination. These algorithms will successfully coordinate the relays on a test system consisting of a portion of the Puget Sound Power and Light Company's 115-kV system. The results are as good or better than manual coordination by experienced engineers. It is proposed that modern data base methods be used for data management. The experimental code has been implemented using relational data base software. A relational data base design which includes all types of protection data expected to be necessary for the CAD system is provided.
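
One kind of coordination criterion mentioned, time grading between primary and backup overcurrent relays, can be illustrated with a toy check. The inverse-time curve, constants, and margin below are placeholders, not the paper's identified criteria.

```python
# Toy primary/backup coordination check for overcurrent relays: for a
# given fault current, the backup must trip a margin after the primary.

def trip_time(pickup_amps, time_dial, fault_amps):
    """Simplified inverse-time characteristic: slower near pickup."""
    m = fault_amps / pickup_amps          # multiple of pickup current
    return time_dial * 5.0 / (m - 1.0)    # toy curve, not a standard one

def coordinated(primary, backup, fault_amps, margin=0.3):
    """True if backup trips at least `margin` seconds after primary."""
    tp = trip_time(*primary, fault_amps)
    tb = trip_time(*backup, fault_amps)
    return tb - tp >= margin

# Each relay = (pickup amps, time dial); backup deliberately slower.
print(coordinated((400, 0.5), (400, 1.0), 2000))   # True for this curve
```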


01 Jan 1984
TL;DR: This dissertation focuses on the identification of a useful and feasible design of a data management system that captures and preserves the inherent dynamics of its content, and explicitly deals with time as it stores, retrieves, and presents data.
Abstract: Time is a universal and pervasive aspect of human activities, yet it is rarely reflected in the ways computer-based information systems are constructed. This dissertation thus focuses on the identification of a useful and feasible design of a data management system that captures and preserves the inherent dynamics of its content, and explicitly deals with time as it stores, retrieves, and presents data. The major results of this study are the identification of functional requirements for temporally oriented information systems, the formulation of a data model that captures subtle aspects of the time dimension, the formulation of end-user's syntax for querying a database based on such data model, and the design of an integrated user interface that graphically conveys temporal properties of data retrieved from the associated database. The initial part of the dissertation explicates the guidelines along which the temporally oriented data model and a corresponding data management system were developed. These guidelines are based on a comprehensive survey of time notions in conceptual data models, human perception of time, structures of memory and managerial planning and control, and informational propagation delays. The resulting data model introduces the cube, a data construct which operationalizes the pervasive spatial metaphor for time: time augments the traditional dimensions (i.e., objects and attributes) of the relational data construct. The cube provides the framework for storage and basic manipulations of data within its temporal context. This data model underlies the Temporally Oriented Data Management System (TODMS), which integrates a historical DBMS with a compatible user interface. The implementation design of the DBMS includes the specification of a temporally enhanced SQL query syntax, and a graphic interface that automatically generates animated pictorial representations for the queries' output. This integrated design provides a tangible means for experiencing the temporal dimension of data: users can thus retrieve a data cube from the database and then spatially browse through it.
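
The cube construct is easy to render as a sketch: time joins objects and attributes as a third coordinate, and a conventional relation is one cross-section of the cube. The Python below is our illustration under that reading; the API is not TODMS's.

```python
# Sketch of the "cube": (object, attribute, time) -> value, with a
# time slice recovering an ordinary relation and a per-cell history.

class Cube:
    def __init__(self):
        self.cells = {}   # (object, attribute, time) -> value

    def put(self, obj, attr, t, value):
        self.cells[(obj, attr, t)] = value

    def slice_at(self, t):
        """Conventional relation: the cube's cross-section at time t."""
        return {(o, a): v for (o, a, tt), v in self.cells.items() if tt == t}

    def history(self, obj, attr):
        """Browse along the time axis for one object/attribute pair."""
        return sorted((tt, v) for (o, a, tt), v in self.cells.items()
                      if (o, a) == (obj, attr))

cube = Cube()
cube.put("emp1", "dept", 1980, "R&D")
cube.put("emp1", "dept", 1983, "Sales")
print(cube.history("emp1", "dept"))   # [(1980, 'R&D'), (1983, 'Sales')]
```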

Journal ArticleDOI
TL;DR: The architecture of data base management systems and the manner in which they work are described, with particular attention to data independence.

01 Jan 1984
TL;DR: The following potential applications of AI to the study of earth science are described: intelligent data management systems; intelligent processing and understanding of spatial data; and automated systems which perform tasks that currently require large amounts of time by scientists and engineers to complete.
Abstract: The following potential applications of AI to the study of earth science are described: (1) intelligent data management systems; (2) intelligent processing and understanding of spatial data; and (3) automated systems which perform tasks that currently require large amounts of time by scientists and engineers to complete. An example is provided of how an intelligent information system might operate to support an earth science project.