
Showing papers on "Data management published in 1988"


Book
01 Jun 1988
TL;DR: In this book, the authors present an action science paradigm for qualitative research in management, covering access through different roles, preunderstanding and understanding, case study research, and the quality of academic research and management consultancy.
Abstract: Qualitative Research in Management; Access Through Different Roles; Preunderstanding and Understanding; Case Study Research; Quality of Academic Research and Management Consultancy; An Action Science Paradigm.

2,868 citations


BookDOI
02 Jul 1988
TL;DR: This book provides documentation for a new version of the S system released in 1988, which enhances the features that have made S popular: interactive computing, flexible graphics, data management and a large collection of functions.
Abstract: This book provides documentation for a new version of the S system released in 1988. The New S Language enhances the features that have made S popular: interactive computing, flexible graphics, data management and a large collection of functions. The New S language features make possible new applications and higher-level programming, including a single unified language, user-defined functions as first-class objects, symbolic computations, more accurate numerical calculations and a new approach to graphics. S now provides direct interfaces to the powerful tools of the UNIX operating system and to algorithms implemented in Fortran and C.

715 citations


Journal ArticleDOI
01 Mar 1988
TL;DR: The HiPAC (High Performance ACtive database system) project addresses two critical problems in time-constrained data management: the handling of timing constraints in databases, and the avoidance of wasteful polling through the use of situation-action rules that are an integral part of the database and are monitored by the DBMS's condition monitor.
Abstract: The HiPAC (High Performance ACtive database system) project addresses two critical problems in time-constrained data management: the handling of timing constraints in databases, and the avoidance of wasteful polling through the use of situation-action rules that are an integral part of the database and are monitored by the DBMS's condition monitor. A rich knowledge model provides the necessary primitives for the definition of timing constraints, situation-action rules, and precipitating events. The execution model allows various coupling modes between transactions, situation evaluations, and actions, and provides the framework for correct concurrent execution of transactions and triggered actions. Different approaches to the scheduling of time-constrained tasks and transactions are explored, and an architecture is being designed with special emphasis on the interaction of the time-constrained, active DBMS and the operating system. Performance models are developed to evaluate the various design alternatives.
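For context, HiPAC's situation-action rules are an early form of what later became known as event-condition-action (ECA) rules. A minimal Python sketch of the idea (all names hypothetical; not the HiPAC design itself):

```python
# Sketch of situation-action (event-condition-action) rules, the
# mechanism an active DBMS uses to avoid wasteful polling.
# All names here are hypothetical.

class Rule:
    def __init__(self, event, condition, action):
        self.event = event          # event name that triggers evaluation
        self.condition = condition  # predicate over the new value
        self.action = action        # triggered operation

class ActiveDB:
    def __init__(self):
        self.data = {}
        self.rules = []

    def on(self, event, condition, action):
        self.rules.append(Rule(event, condition, action))

    def update(self, key, value):
        self.data[key] = value
        # The condition monitor checks rules on each update instead of
        # letting applications poll the database.
        for rule in self.rules:
            if rule.event == key and rule.condition(value):
                rule.action(value)

db = ActiveDB()
db.on("temperature",
      condition=lambda v: v > 100,
      action=lambda v: print(f"alarm: temperature {v} exceeds limit"))
db.update("temperature", 120)   # fires the rule without any polling
```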

489 citations


Journal ArticleDOI
TL;DR: There is no single, dominant approach to improving the management of data, and firms have adopted multiple approaches that appear to be very diverse in business objective, organizational scope, planning method, and "product," i.e., deliverable produced.
Abstract: Today, corporations are placing increasing emphasis on the management of data. To learn more about effective approaches to "managing the data resource," case studies of 31 data management efforts in 20 diverse firms have been conducted. The major finding is that there is no single, dominant approach to improving the management of data. Rather, firms have adopted multiple approaches that appear to be very diverse in (1) business objective, (2) organizational scope, (3) planning method, and (4) "product," i.e., deliverable produced. The dominant business objective for successful action is improved managerial information; most data management efforts are "targeted" without a formal data planning process; and the dominant product was "information databases." In addition, several key organizational issues must be addressed when undertaking any data management effort.

176 citations


Journal ArticleDOI
TL;DR: The Cactis project is an on-going effort oriented toward extending database support from traditional business-oriented applications to software environments, and its integration of the type constructors of semantic models and the localized behavior capabilities of object-oriented database management systems.
Abstract: The Cactis project is an on-going effort oriented toward extending database support from traditional business-oriented applications to software environments. The main goals of the project are to construct an appropriate model, and develop new techniques to support the unusual data management needs of software environments, including program compilations, software configurations, load modules, project schedules, software versions, nested and long transactions, and program transformations. The ability to manage derived information is common to many of these data needs, and the Cactis database management system has the ability to represent and maintain derived data in a time- and space-efficient fashion. A central contribution of Cactis is its integration of the type constructors of semantic models and the localized behavior capabilities of object-oriented database management systems.
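The derived-data capability mentioned above is commonly realized by invalidation plus lazy re-evaluation. A minimal sketch of that general technique (illustrative only, not the actual Cactis implementation):

```python
# Sketch of derived-data maintenance by invalidation and lazy
# re-evaluation: updates only mark derived values out of date, and
# recomputation happens on demand. Not the actual Cactis code.

class Derived:
    def __init__(self, compute):
        self.compute = compute   # function of the base data
        self.valid = False
        self.cached = None

class DB:
    def __init__(self):
        self.base = {}
        self.derived = {}
        self.depends = {}        # base key -> derived names to invalidate

    def define(self, name, sources, compute):
        self.derived[name] = Derived(compute)
        for s in sources:
            self.depends.setdefault(s, []).append(name)

    def put(self, key, value):
        self.base[key] = value
        for name in self.depends.get(key, []):
            self.derived[name].valid = False   # mark stale, do not recompute

    def get(self, name):
        d = self.derived[name]
        if not d.valid:                        # recompute only on demand
            d.cached = d.compute(self.base)
            d.valid = True
        return d.cached

db = DB()
db.define("needs_rebuild", ["object_file_time", "source_file_time"],
          lambda b: b["source_file_time"] > b["object_file_time"])
db.put("object_file_time", 10)   # invalidates needs_rebuild cheaply
db.put("source_file_time", 20)
print(db.get("needs_rebuild"))   # True, computed once on demand
```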

76 citations



Journal ArticleDOI
01 Sep 1988
TL;DR: This paper explores the application of knowledge-based and database systems, and fuzzy sets, in construction risk management, and an integrated knowledge system is presented.
Abstract: Today's construction industry involves more dynamic and uncertain planning than ever before. To approach complex problems in construction management, decision-makers should follow a systematic and professional approach to risk management. This paper explores the application of knowledge-based and database systems, and fuzzy sets, in construction risk management. An integrated knowledge system is presented.
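As an illustration of the fuzzy-set ingredient: a triangular membership function turns a crisp estimate into degrees of membership in linguistic risk categories. A hypothetical sketch, not taken from the paper's knowledge system:

```python
# Minimal sketch of grading a construction risk with fuzzy sets.
# The categories and breakpoints are invented for illustration.

def triangular(x, a, b, c):
    """Membership in a triangular fuzzy set rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_profile(delay_days):
    return {
        "low":    triangular(delay_days, -1, 0, 10),
        "medium": triangular(delay_days, 5, 15, 25),
        "high":   triangular(delay_days, 20, 40, 60),
    }

print(risk_profile(12))   # {'low': 0.0, 'medium': 0.7, 'high': 0.0}
```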

44 citations


Journal ArticleDOI
01 Nov 1988
TL;DR: Different synchronization methods for replicated data in distributed database systems are classified by underlying mechanisms and the type of information they use in ordering the operations of transactions, and some of the replication management methods that have appeared in the literature are surveyed.
Abstract: Replication is the key factor in improving the availability of data in distributed systems. Replicated data is stored at multiple sites so that it can be accessed by the user even when some of the copies are not available due to site failures. A major restriction on using replication is that replicated copies must behave like a single copy, i.e., mutual consistency as well as internal consistency must be preserved. Synchronization techniques for replicated data in distributed database systems have been studied in order to increase the degree of concurrency and to reduce the possibility of transaction rollback. In this paper, we classify different synchronization methods by underlying mechanisms and the type of information they use in ordering the operations of transactions, and survey some of the replication management methods that have appeared in the literature.
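One classical method in the family surveyed here is quorum consensus: reads and writes each contact a majority of replicas, so every read quorum overlaps every write quorum and sees the latest version. A minimal sketch of that technique (illustrative, not any specific scheme from the paper):

```python
# Sketch of quorum-consensus replica control with version numbers.
# Majorities guarantee read/write quorum overlap, so reads return
# the most recently written value even after some site failures.

class Replica:
    def __init__(self):
        self.version = 0
        self.value = None
        self.up = True

def write(replicas, value):
    live = [r for r in replicas if r.up]
    if len(live) <= len(replicas) // 2:
        raise RuntimeError("no write quorum available")
    new_version = max(r.version for r in live) + 1
    for r in live:
        r.version, r.value = new_version, value

def read(replicas):
    live = [r for r in replicas if r.up]
    if len(live) <= len(replicas) // 2:
        raise RuntimeError("no read quorum available")
    return max(live, key=lambda r: r.version).value   # newest version wins

replicas = [Replica() for _ in range(5)]
write(replicas, "x=1")
replicas[0].up = False          # site failure
replicas[1].up = False          # data stays available: 3 of 5 remain
print(read(replicas))           # "x=1"
```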

43 citations


01 Jan 1988
TL;DR: The architecture and operation of the MICON system is described, an integrated collection of programs which automatically synthesizes small computer systems from high level specifications, providing a rapid prototyping capability.
Abstract: The MICON system is an integrated collection of programs which automatically synthesizes small computer systems from high-level specifications. The system addresses multiple levels of design, from logical through physical, providing a rapid prototyping capability. Two programs form MICON's nucleus: a knowledge-based synthesis tool called M1, and an automated knowledge acquisition tool named CGEN which is used to teach M1 how to design. Other tools in the MICON system are an integrated database and associated data management tools. The system is fully functional, having been used to generate working designs. This paper describes the architecture and operation of the MICON system.

37 citations


Proceedings ArticleDOI
01 Jun 1988
TL;DR: A Data Management System (DMS) for VLSI design is presented that supports hierarchical decomposition, multiple levels of abstraction, concurrency control and design evolution.
Abstract: A Data Management System (DMS) for VLSI design is presented that supports hierarchical decomposition, multiple levels of abstraction, concurrency control and design evolution. Our contribution is original in that we employ semantic data modeling techniques to derive a simple, yet powerful, data schema that represents the logical organization of VLSI design data. The resulting DMS provides an open framework for the integration of design tools and relieves the designer of the burden of organizing his design data.
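A semantic schema of the kind described, supporting hierarchical decomposition, multiple abstraction levels, and design evolution, can be pictured as typed objects with aggregation and version relationships. A hypothetical sketch (not the paper's actual schema):

```python
# Hypothetical sketch of a semantic data schema for VLSI design data:
# cells decompose hierarchically, each cell has representations at
# several abstraction levels, and each representation carries versions.
# An illustration only, not the schema derived in the paper.
from dataclasses import dataclass, field

@dataclass
class Version:
    number: int
    status: str = "working"        # e.g. working / released

@dataclass
class Representation:
    level: str                     # e.g. "behavioral", "layout"
    versions: list[Version] = field(default_factory=list)

@dataclass
class Cell:
    name: str
    representations: list[Representation] = field(default_factory=list)
    subcells: list["Cell"] = field(default_factory=list)  # hierarchy

alu = Cell("alu", [Representation("layout", [Version(1)])])
cpu = Cell("cpu", [Representation("behavioral", [Version(1), Version(2)])],
           [alu])
print(cpu.subcells[0].name)        # "alu"
```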

36 citations


01 Jan 1988
TL;DR: Moira, the Athena Service Management System, provides centralized control of data administration, a protocol for interfacing to the database, tools for accessing and modifying the database, and an automated mechanism for data distribution.
Abstract: Maintaining, managing, and supporting an unbounded number of distributed network services on multiple server instances requires new solutions. Moira, the Athena Service Management System, provides centralized control of data administration, a protocol for interfacing to the database, tools for accessing and modifying the database, and an automated mechanism for data distribution.

Journal ArticleDOI
TL;DR: The strategy for implementing a patient data management system at Cedars-Sinai Medical Center is described and the steps for planning, training, configuration and problem resolution are detailed.
Abstract: Implementation of an ICU data management system requires consideration of many factors, including site preparation for equipment, system backup, and ICU and medical staff training and evaluation. This paper describes the strategy for implementing a patient data management system at Cedars-Sinai Medical Center and details our steps for planning, training, configuration, and problem resolution.


Journal ArticleDOI
TL;DR: There is a need for standardization of risk factor questionnaires for epidemiological research; the development and implementation of a comprehensive cancer risk evaluation program at the University of Texas M.D. Anderson Cancer Center is described.

Journal ArticleDOI
TL;DR: In this paper, the requirements of CACSD are formulated from the point of view of Database Management Systems, and it is concluded that there has been considerable movement towards the realisation of software tools for CACSD, but that this owes more to modern ideas about programming languages than to DBMS developments.

01 Jan 1988
TL;DR: It is shown that managing data using relational database management software is a more efficient way of preparing longitudinal analysis files than traditional methods that use statistical packages such as OSIRIS, SAS, or SPSSX.
Abstract: The Survey of Income and Program Participation (SIPP) reflects the growing complexity and size of social and economic microdata files designed to examine a broad range of related policy issues. This paper shows that managing data using relational database management software is a more efficient way of preparing longitudinal analysis files than traditional methods that use statistical packages such as OSIRIS, SAS, or SPSSX. The authors illustrate how social scientists and policy analysts can achieve large gains in productivity and reduce the large overhead that results from everyone carrying out the same data management operations to create longitudinal analysis files. The facility called SIPP ACCESS, located at the University of Wisconsin-Madison, maintains the 1984 SIPP in a relational data management system and stores the complete 9 waves of core and topical module data on optical laser disks. Part 1 of this paper identifies the weaknesses of traditional database management strategies for constructing longitudinal analysis files from SIPP. Part 2 illustrates how the relational database management system was used to construct longitudinal analysis files from the 1984 SIPP. Sections A-C replicate the Servais examples. Section D makes use of the longitudinal files that SIPP ACCESS has created to show that only a few lines of English-like code are required for constructing a longitudinal analysis file of all persons who ever received welfare and were present in the panel for at least the first 8 interviews. Part 3 summarizes the differences between traditional data management using statistical packages and the relational database management system. It concludes with a discussion of how a central data sharing facility like SIPP ACCESS provides the impetus for making major investments in devising efficient, cost-effective solutions for improving access to complex data files.
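The Section D query (all persons who ever received welfare and were present for the first 8 interviews) reduces to a few lines of SQL in a relational system. A hypothetical illustration with an invented schema; the actual SIPP ACCESS tables differ:

```python
# Hypothetical illustration of why a relational system shortens the
# construction of longitudinal analysis files. Schema and data invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person_wave (
    person_id INTEGER,
    wave      INTEGER,
    welfare   INTEGER   -- 1 if welfare was received in that wave
);
INSERT INTO person_wave VALUES
    (1,1,0),(1,2,1),(1,3,0),(1,4,0),(1,5,0),(1,6,0),(1,7,0),(1,8,0),
    (2,1,0),(2,2,0),(2,3,0),(2,4,0);   -- person 2 left after wave 4
""")

# Persons present in all of the first 8 waves who ever received welfare.
rows = con.execute("""
    SELECT person_id
    FROM person_wave
    WHERE wave <= 8
    GROUP BY person_id
    HAVING COUNT(DISTINCT wave) = 8 AND MAX(welfare) = 1
""").fetchall()
print(rows)    # [(1,)]
```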


Book ChapterDOI
01 Sep 1988
TL;DR: This paper describes an approach to the integration of heterogeneous data management applications based on techniques derived from the fields of Object Oriented Databases and Object Oriented Programming Languages.
Abstract: This paper describes an approach to the integration of heterogeneous data management applications. The proposed approach is based on techniques derived from the fields of Object Oriented Databases and Object Oriented Programming Languages. The architecture of a system which implements this approach is also briefly described.

Book ChapterDOI
01 Jan 1988
TL;DR: The basic message of the paper is that the Holy Grail of scalable parallelism, even in the limited application domain of data management, is still elusive, but the authors have made several significant steps towards attaining it.
Abstract: Scalable parallel computers have been the Holy Grail of much of Computer Science research in the past two decades. There are now several products on the market, ranging from dozen-processor bus-based systems to the multiple-thousand-processing-element Connection Machine. These products, including data management systems, use parallelism to satisfy a range of system goals including performance, availability and cost. In this paper, I discuss parallelism issues in the context of data management. My focus is the Shared Nothing class of parallel system, examples of which include products from Tandem and Teradata and experimental systems such as the University of Wisconsin's Gamma and MCC's Bubba. I outline a number of key research areas emphasizing the inherent problems and current state of the art. Next, I summarize recent performance results. The basic message of the paper is that the Holy Grail of scalable parallelism, even in the limited application domain of data management, is still elusive; but we've made several significant steps towards attaining it.
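The defining property of Shared Nothing systems, where each node owns its own memory and disk and cooperates only by exchanging data, can be sketched as hash-partitioned parallel scanning. An illustrative Python sketch (not Gamma, Bubba, or any product's implementation):

```python
# Sketch of the shared-nothing idea: rows are hash-partitioned across
# workers that share no state, each worker scans only its partition,
# and the partial results are merged. Illustrative only.
from concurrent.futures import ProcessPoolExecutor

def local_scan(partition):
    # Each node filters its own data; no shared memory or disk is touched.
    return [row for row in partition if row["amount"] > 100]

def parallel_select(rows, workers=4):
    partitions = [[] for _ in range(workers)]
    for row in rows:
        partitions[hash(row["key"]) % workers].append(row)  # declustering
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(local_scan, partitions))
    return [row for part in results for row in part]        # merge

if __name__ == "__main__":
    rows = [{"key": i, "amount": i * 7} for i in range(50)]
    print(len(parallel_select(rows)))   # count of rows with amount > 100
```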

Proceedings Article
21 Aug 1988
TL;DR: A multi-tasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques that has been interfaced to an actual spacecraft and is able to process the incoming telemetry in "real-time" (i.e., several hundred data changes per second).
Abstract: This paper describes a multi-tasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques. To handle asynchronous inputs and perform in real-time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in "real-time" (i.e., several hundred data changes per second).
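The three-process split described above maps naturally onto message passing over queues. A minimal sketch of the pattern (process roles from the paper; the data, thresholds, and logic are invented):

```python
# Sketch of the three-process message-passing pattern: a data
# management process compresses and routes telemetry, an inference
# process analyzes it, and an I/O process displays the results.
# Simplified to two queues and toy logic.
from multiprocessing import Process, Queue

def data_management(raw, to_inference):
    last = None
    for sample in raw:
        if sample != last:                 # compress: forward changes only
            to_inference.put(sample)
            last = sample
    to_inference.put(None)                 # end-of-stream marker

def inference(to_inference, to_io):
    while (sample := to_inference.get()) is not None:
        status = "ALARM" if sample["value"] > 90 else "ok"
        to_io.put((sample["sensor"], status))
    to_io.put(None)

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    raw = [{"sensor": "battery_temp", "value": v} for v in (70, 70, 95)]
    p1 = Process(target=data_management, args=(raw, q1))
    p2 = Process(target=inference, args=(q1, q2))
    p1.start(); p2.start()
    # The I/O role is played by the main process here.
    while (msg := q2.get()) is not None:
        print(msg)          # ('battery_temp', 'ok') then ('battery_temp', 'ALARM')
    p1.join(); p2.join()
```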


01 Jan 1988
TL;DR: Analysis, conclusions, and recommendations are presented in 5 major subject areas involving social development planning and PHC: community-based PHC MIS; management information for district health systems based on PHC; cost analysis in PHC MIS; microcomputers and alternative data management techniques; and microcomputers in PHC planning and management decision modeling.
Abstract: Management information systems (MIS) in primary health care (PHC) and the use of microcomputer technology are examined generally and in specific developing countries including Thailand, Uganda, Nepal, Haiti, Bangladesh, Kenya, and Pakistan. In addition, analysis, conclusions, and recommendations are presented in 5 major subject areas involving social development planning and PHC: community-based PHC MIS; management information for district health systems based on PHC; cost analysis in PHC MIS; microcomputers and alternative data management techniques; and microcomputers in PHC planning and management decision modeling. The reports were presented at an international workshop held in Lisbon, Portugal, in November 1987.

Journal ArticleDOI
01 Jul 1988-Robotica
TL;DR: In this article, the major tasks to be solved when designing tool management systems for flexible manufacturing systems are summarized; a solution is given for describing the data structure of a tool data base integrated with a generic tool description method, and a sample transaction shows how the FMS real-time control system can access and use this data base.
Abstract: Considering the fact that Flexible Manufacturing Systems (FMS) should be able to accommodate a variety of different parts in random order, tool management at cell level and tool transportation, tool data management, tooling data collection, tool maintenance, and manual and/or robotized tool assembly at FMS system level are very important. Tooling information in FMS is used by several subsystems, including: production planning, process control, dynamic scheduling, part programming, tool preset and maintenance, robotized and/or manual tool assembly, stock control and materials storage. The paper summarizes the major tasks to be solved when designing tool management systems for FMS, gives a solution for describing the data structure of a tool data base integrated with a generic tool description method, and shows a sample transaction of the way the FMS real-time control system can access and use this data base.
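A hypothetical sketch of such a transaction: the real-time controller looks up a matching tool with enough remaining life and reserves it atomically. The schema and values are invented, not taken from the paper:

```python
# Hypothetical sketch of the kind of transaction an FMS real-time
# controller might run against a tool database: find a tool matching
# the operation, check remaining life, and reserve it. Invented schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tool (
    tool_id INTEGER PRIMARY KEY, kind TEXT, diameter_mm REAL,
    remaining_life_min REAL, reserved_by TEXT)""")
con.executemany("INSERT INTO tool VALUES (?,?,?,?,?)", [
    (1, "drill", 8.0, 4.0,  None),   # nearly worn out
    (2, "drill", 8.0, 55.0, None),
    (3, "mill", 12.0, 90.0, None),
])

def reserve_tool(kind, diameter, minutes, cell):
    with con:                        # one atomic transaction
        row = con.execute(
            """SELECT tool_id FROM tool
               WHERE kind=? AND diameter_mm=? AND reserved_by IS NULL
                     AND remaining_life_min >= ?
               ORDER BY remaining_life_min LIMIT 1""",
            (kind, diameter, minutes)).fetchone()
        if row is None:
            return None              # trigger tool preset/maintenance instead
        con.execute("UPDATE tool SET reserved_by=? WHERE tool_id=?",
                    (cell, row[0]))
        return row[0]

print(reserve_tool("drill", 8.0, 30.0, "cell-3"))   # 2: enough life left
```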

Proceedings ArticleDOI
23 May 1988
TL;DR: A prototype implementation of a data management system primarily designed for technical applications is discussed; the basic idea is to enrich the concept of consistency supported in a classical database system with a more flexible data distribution mechanism.
Abstract: A prototype implementation of a data management system primarily designed for technical applications is discussed. The system may be regarded as a superset of well-known data administration concepts like file systems and database systems, both centralized and distributed. The basic idea is to enrich the concept of consistency supported in a classical database system with a more flexible data distribution mechanism. The prototype makes it possible to validate the basic concepts; it will then be extended to a more powerful version.

Journal ArticleDOI
01 Jun 1988
TL;DR: The specialized data management system described in this paper was motivated by the need for much more efficient data management than a standard database management system could provide for particle physics codes in shared memory multiprocessor environments.
Abstract: The specialized data management system described in this paper was motivated by the need for much more efficient data management than a standard database management system could provide for particle physics codes in shared memory multiprocessor environments. The special characteristics of data and access patterns in particle physics codes need to be fully exploited in order to effect efficient data management. The data management system allows parametric user control over system features not usually available to users, especially details of physical design and retrieval such as horizontal clustering, asynchronous I/O, and automatic distribution across processors. In the past, each physics code has constructed the equivalent of a primitive data management system from scratch. The system described in this paper is a generic system that can now be interfaced with a variety of physics codes.

01 Mar 1988
TL;DR: A data base containing descriptions of navigation lock studies conducted by the Corps of Engineers during 1937-1984 is described; it identifies measured quantities, lock features, and a complete bibliography of reports.
Abstract: This report describes a data base which contains descriptions of navigation lock studies conducted by the Corps of Engineers during 1937-1984. The data base identifies measured quantities, lock features, and a complete bibliography of reports. Data base maintenance and management are automated by means of computer software.


Proceedings ArticleDOI
01 Jun 1988
TL;DR: A special-purpose database management system for the VLSI design environment is presented that could simplify the task of implementing an integrated VLSI design system and reduce the errors made in doing so.
Abstract: A special-purpose database management system for the VLSI design environment is presented. Besides supporting design data management and tools integration, the system provides many facilities for supporting the fast development of efficient and powerful VLSI CAD tools. This system could simplify the task of implementing an integrated VLSI design system and reduce the errors made in doing so.

Proceedings ArticleDOI
29 Nov 1988
TL;DR: The authors describe the software implementation of the satellite management function in the DoD Navstar GPS (Global Positioning System) receivers and illustrate the solutions that were designed into the Navstar user equipment software to deal with the obstacles presented in creating the efficient, effective, and timely data management capability required by a real-time navigation system.
Abstract: The authors describe the software implementation of the satellite management function in the DoD Navstar GPS (Global Positioning System) receivers. The software implementation addresses three functional requirements: database management of satellite almanac, ephemeris, and deterministic corrections data; computation of precise satellite position and velocity for use by navigation software; and use of satellite and receiver position data to periodically calculate the constellation of four satellites with optimum geometry for navigation. The authors present an implementation description, detailing how these functions are accomplished, with particular emphasis on the interfaces with the receiver satellite tracking functions necessary to acquire the 50 Hz downlink data. They also illustrate the solutions that were designed into the Navstar user equipment software to deal with the obstacles presented in creating the efficient, effective, and timely data management capability required by a real-time navigation system.
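The optimum-geometry step is conventionally framed as minimizing GDOP (geometric dilution of precision) over candidate satellite subsets. A brute-force sketch of that computation (illustrative only; not the actual Navstar user-equipment algorithm):

```python
# Sketch of optimum-geometry satellite selection by brute-force GDOP
# minimization over all 4-satellite subsets. Illustrative only.
from itertools import combinations
import numpy as np

def gdop(unit_vectors):
    # Geometry matrix: one row [ux, uy, uz, 1] per line of sight.
    a = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
    try:
        return float(np.sqrt(np.trace(np.linalg.inv(a.T @ a))))
    except np.linalg.LinAlgError:
        return float("inf")        # degenerate geometry: unusable subset

def best_four(los_by_sat):
    # los_by_sat: {sat_id: unit line-of-sight vector from the receiver}
    best = min(combinations(los_by_sat, 4),
               key=lambda ids: gdop(np.array([los_by_sat[i] for i in ids])))
    return best, gdop(np.array([los_by_sat[i] for i in best]))

# Toy visible constellation (approximately unit vectors): one zenith
# satellite and four at two different elevations.
los = {
    1: (0.0, 0.0, 1.0),
    2: (0.87, 0.0, 0.5),
    3: (-0.87, 0.0, 0.5),
    4: (0.0, 0.94, 0.34),
    5: (0.0, -0.94, 0.34),
}
ids, value = best_four(los)
print(ids, round(value, 2))        # chosen subset and its GDOP
```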

Book ChapterDOI
Peter Dadam, Klaus Küspert, N. Südkamp, R. Erbe, Volker Linnemann, Peter Pistor, Georg Walch
01 Jan 1988
TL;DR: The AIM-P system is described in some detail, including its data model and database language, with emphasis on the aspects of extensibility by user-defined data types and functions.
Abstract: R2D2 — A Relational Robotics Database System with Extensible Data Types — is a joint project of the IBM Heidelberg Scientific Center and the University of Karlsruhe, Computer Science Department. It aims at the design and implementation of a data management system to support engineering (esp. robotics) applications. The management of complex objects in R2D2 is supported by the underlying database system AIM-P. In this paper we describe the AIM-P system in some detail, its data model and database language, and we emphasize the aspects of extensibility by user defined data types and functions.