
Showing papers on "Data management published in 1987"



Proceedings ArticleDOI
01 Dec 1987
TL;DR: VBASE, an object-oriented development environment that combines a procedural object language and persistent objects into one integrated system, is presented, and the paper describes how it combines language and database functionality.
Abstract: Object-oriented languages generally lack support for persistent objects—that is, objects that survive the process or programming session. On the other hand, database systems lack the expressibility of object-oriented languages. Both persistence and expressibility are necessary for production application development.

This paper presents a brief overview of VBASE, an object-oriented development environment that combines a procedural object language and persistent objects into one integrated system. Language aspects of VBASE include strong datatyping, a block-structured schema definition language, and parameterization, or the ability to type members of aggregate objects. Database aspects include system support for one-to-one, one-to-many, and many-to-many relationships between objects, an inverse mechanism, user control of object clustering in storage for space and retrieval efficiency, and support for trigger methods.

Unique aspects of the system are its mechanisms for custom implementations of storage allocation and access methods of properties and types, and free operations, that is, operations that are not dispatched according to any defined type.

During the last several years, both languages and database systems have begun to incorporate object features. There are now many object-oriented programming languages [Gol1983, Tes1985, Mey1987, Cox1986, Str1986]. Object-oriented database management systems are not as prevalent yet, and sometimes tend to use different terms (Entity-Relationship, Semantic Data Model), but they are beginning to appear on the horizon [Cat1983, Cop1984, Ston1986, Mylo1980]. However, we are not aware of any system which combines both language and database features in a single object-oriented development platform. This is essential, since a system must provide both complex data management and advanced programming language features if it is to be used to develop significant production software systems. Providing only one or the other is somewhat akin to providing half a bridge: it might be made structurally sound, perhaps, but it is not particularly useful to one interested in getting across the river safely.

Object-oriented languages have been available for many years. The productivity increases achievable through the use of such languages are well recognized. However, few serious applications have been developed using them. One reason has been performance, though this drawback is being eliminated through the development of compiled object languages. The remaining major negative factor, in our view, is the lack of support for persistence: the lack of objects that survive the processing session and provide object sharing among multiple users of an application.

Database management systems, in contrast, suffer from precisely the opposite problem. While having excellent facilities for managing large amounts of data stored on mass media, they generally support only limited expression capabilities and no structuring facilities.

Both language and database systems usually solve this problem by providing bridges between the systems. Thus the proliferation of 'embedded languages', allowing language systems to access database managers. These bridges are usually awkward, and still provide only restricted functionality. Both performance and safety can be enhanced through a tighter coupling between the data management and programming language facilities.

It is this lack of a truly integrated system which provided our inspiration at Ontologic, Inc.
This paper reviews Ontologic's VBASE Integrated Object System and describes how it combines language and database functionality.
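To make the combination concrete, the following sketch illustrates in present-day Python the kind of facilities the paper describes: persistent objects, a typed one-to-many relationship with an inverse, and a trigger method. It is purely illustrative; the class names and the toy in-memory store are hypothetical and do not reflect actual VBASE syntax or semantics.

    # Illustrative sketch only: a toy "persistent object" layer with a typed
    # one-to-many relationship (plus its inverse) and a trigger method.
    # None of this reflects actual VBASE syntax or semantics.

    class Store:
        """Stand-in for an object database: maps object ids to objects."""
        def __init__(self):
            self.objects = {}
            self.next_id = 0

        def persist(self, obj):
            obj.oid = self.next_id
            self.objects[obj.oid] = obj
            self.next_id += 1
            return obj.oid

    store = Store()

    class Department:
        def __init__(self, name):
            self.name = name
            self.employees = []                 # one-to-many relationship
            store.persist(self)

    class Employee:
        def __init__(self, name, department):
            if not isinstance(department, Department):   # strong typing check
                raise TypeError("department must be a Department")
            self.name = name
            self.department = department        # inverse of Department.employees
            department.employees.append(self)
            store.persist(self)
            self.on_create()                    # "trigger method" fired on insert

        def on_create(self):
            print(f"trigger: {self.name} added to {self.department.name}")

    eng = Department("Engineering")
    Employee("Ada", eng)
    print([e.name for e in eng.employees])      # ['Ada']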

283 citations


ReportDOI
01 Sep 1987
TL;DR: The POSTGRES data model, as described in this paper, is a relational model extended with abstract data types, data of type procedure, and attribute and procedure inheritance; these mechanisms can be used to simulate a wide variety of semantic and object-oriented data modeling constructs, including aggregation and generalization, complex objects with shared subobjects, and attributes that reference tuples in other relations.
Abstract: This paper describes the data model for POSTGRES, a next-generation extensible database management system being developed at the University of California [StR86]. The data model is a relational model that has been extended with abstract data types, data of type procedure, and attribute and procedure inheritance. These mechanisms can be used to simulate a wide variety of semantic and object-oriented data modeling constructs including aggregation and generalization, complex objects with shared subobjects, and attributes that reference tuples in other relations.
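The three extensions named in the abstract (abstract data types, procedure-valued attributes, and inheritance) have rough analogues in ordinary programming languages, which may help readers unfamiliar with the model. The sketch below is a loose Python analogy, not POSTGRES or POSTQUEL syntax; every name in it is invented for illustration.

    # Loose analogy only, not POSTQUEL: an abstract data type with its own
    # operations, attribute inheritance via subclassing (generalization),
    # and a procedure-valued attribute whose value is a query, not a datum.

    from dataclasses import dataclass, field

    @dataclass
    class Point:                          # abstract data type
        x: float
        y: float
        def distance_to(self, other):
            return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) ** 0.5

    @dataclass
    class City:                           # base "relation"
        name: str
        location: Point

    @dataclass
    class Capital(City):                  # inherits City's attributes
        country: str = ""

    @dataclass
    class Landmark:
        name: str
        location: Point
        nearest_city: object = field(default=None)   # procedure-valued attribute

    cities = [City("Springfield", Point(0, 0)),
              Capital("Sacramento", Point(3, 4), "USA")]
    tower = Landmark("Tower", Point(1, 1),
                     nearest_city=lambda: min(
                         cities, key=lambda c: c.location.distance_to(Point(1, 1))))
    print(tower.nearest_city().name)      # evaluating the stored procedure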

223 citations


Journal ArticleDOI
TL;DR: This paper examines the data management requirements of group work applications on the basis of experience with three prototype systems and on observations from the literature, and database and object management technologies that support these requirements are briefly surveyed.
Abstract: Data sharing is fundamental to computer-supported cooperative work: People share information through explicit communication channels and through their coordinated use of shared databases. This paper examines the data management requirements of group work applications on the basis of experience with three prototype systems and on observations from the literature. Database and object management technologies that support these requirements are briefly surveyed, and unresolved issues in the particular areas of access control and concurrency control are identified for future research.
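One recurring observation in this line of work is that group applications often replace short database transactions with explicit, long-lived check-out and check-in of shared artifacts. The sketch below is a generic illustration of that pattern, not a mechanism proposed by the paper; the class and method names are hypothetical.

    # Generic sketch of check-out/check-in sharing, a coarse alternative to
    # classical concurrency control sometimes used for cooperative work.

    class SharedDocument:
        def __init__(self, text=""):
            self.text = text
            self.checked_out_by = None

        def check_out(self, user):
            if self.checked_out_by is not None:
                raise RuntimeError(f"already checked out by {self.checked_out_by}")
            self.checked_out_by = user
            return self.text                  # private working copy

        def check_in(self, user, new_text):
            if self.checked_out_by != user:
                raise RuntimeError("check-in refused: not the current holder")
            self.text, self.checked_out_by = new_text, None

    doc = SharedDocument("draft v1")
    draft = doc.check_out("alice")
    doc.check_in("alice", draft + " + alice's edits")
    print(doc.text)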

212 citations


Proceedings ArticleDOI
01 Dec 1987
TL;DR: The Datacycle architecture is introduced, an attempt to exploit the enormous transmission bandwidth of optical systems to permit the implementation of high throughput multiprocessor database systems.
Abstract: The evolutionary trend toward a database-driven public communications network has motivated research into database architectures capable of executing thousands of transactions per second. In this paper we introduce the Datacycle architecture, an attempt to exploit the enormous transmission bandwidth of optical systems to permit the implementation of high throughput multiprocessor database systems. The architecture has the potential for unlimited query throughput, simplified data management, rapid execution of complex queries, and efficient concurrency control. We describe the logical operation of the architecture and discuss implementation issues in the context of a prototype system currently under construction.
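The central idea, cycling the entire database past many filter processors that each evaluate their own query against the broadcast stream, can be conveyed with a small simulation. This is a hypothetical sketch of the broadcast-and-filter pattern only, not the prototype described in the paper.

    # Toy simulation of broadcast-and-filter: the full database "cycles" past
    # every filter processor, and each processor keeps the records matching
    # its own query predicate.

    from concurrent.futures import ThreadPoolExecutor

    DATABASE = [{"acct": i, "balance": i * 10} for i in range(1000)]

    def filter_processor(predicate):
        """Scan one full broadcast cycle, returning matching records."""
        return [record for record in DATABASE if predicate(record)]

    queries = [
        lambda r: r["balance"] > 9900,       # query from one subscriber
        lambda r: r["acct"] % 250 == 0,      # query from another subscriber
    ]

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(filter_processor, queries))

    for i, matched in enumerate(results):
        print(f"query {i}: {len(matched)} matching records")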

167 citations


Proceedings ArticleDOI
01 Dec 1987
TL;DR: The data management extension architecture is described, which provides common services for coordination of storage method and attachment execution and some implementation issues and techniques.
Abstract: A database management system architecture is described that facilitates the implementation of data management extensions for relational database systems. The architecture defines two classes of data management extensions: alternative ways of storing relations, called relation “storage methods”, and access paths, integrity constraints, or triggers, which are “attachments” to relations. Generic sets of operations are defined for storage methods and attachments, and these operations must be provided in order to add a new storage method or attachment type to the system. The data management extension architecture also provides common services for coordination of storage method and attachment execution. This article describes the data management extension architecture along with some implementation issues and techniques.
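A natural way to read the two extension classes is as a pair of interfaces that any new storage method or attachment must implement, with the system coordinating the two at update time. The sketch below expresses that reading in Python; the interface names and operations are hypothetical, not the actual interfaces of the system described.

    # Hypothetical interfaces for the two extension classes: relation
    # "storage methods" and "attachments" (access paths, constraints,
    # triggers), with a Relation object coordinating their execution.

    from abc import ABC, abstractmethod

    class StorageMethod(ABC):
        @abstractmethod
        def insert(self, row): ...
        @abstractmethod
        def scan(self): ...

    class Attachment(ABC):
        @abstractmethod
        def on_insert(self, row): ...

    class HeapFile(StorageMethod):
        def __init__(self):
            self.rows = []
        def insert(self, row):
            self.rows.append(row)
        def scan(self):
            return iter(self.rows)

    class HashIndex(Attachment):
        def __init__(self, key):
            self.key, self.index = key, {}
        def on_insert(self, row):
            self.index[row[self.key]] = row   # maintain the access path

    class Relation:
        """Common service: run the storage method, then every attachment."""
        def __init__(self, storage, attachments=()):
            self.storage, self.attachments = storage, list(attachments)
        def insert(self, row):
            self.storage.insert(row)
            for attachment in self.attachments:
                attachment.on_insert(row)

    emp = Relation(HeapFile(), [HashIndex(key="id")])
    emp.insert({"id": 1, "name": "Ada"})
    print(list(emp.storage.scan()))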

101 citations


Journal ArticleDOI
TL;DR: The difference is information technology: applied imaginatively, it can transform the way businesses collect and use data.
Abstract: A new regulation takes effect: some companies respond quickly; others can't. Budget guidelines are changed: some companies distribute them immediately; others can't. The difference is information technology. Applied imaginatively, it can transform the way businesses collect and use data.

71 citations


Journal ArticleDOI
TL;DR: The requirements for knowledge management in organizational planning are described, a knowledge-based planning system implemented by the authors is presented, and tools for describing, classifying, and storing the output of the planning process are described.

Abstract: There is growing recognition that the ability to provide automated support for unstructured decision making within organizations will require the integration of knowledge-based expert system techniques and traditional decision support system architectures. Several knowledge representations are applicable in the specification, management, and communication of knowledge associated with organizational planning, a classic ill-structured problem facing organizations. This paper describes the requirements for knowledge management in organizational planning. A knowledge-based planning system that has been implemented by the authors is presented. The system integrates data management, model management, and process management systems within a group decision support system environment. Knowledge management tools for describing, classifying, and storing the output of the planning process are described. Use of the system in the Management Information Systems (MIS) Planning and Decision Laboratory at the Unive...

70 citations


Journal ArticleDOI
01 Jun 1987
TL;DR: The workflow typical in a disaster scenario is analyzed and the design considerations for a virtual information center (VIC) that can both efficiently and effectively coordinate and process a large number of information requests for disaster preparation/management/recovery teams are discussed.
Abstract: There are innumerable human and organizational circumstances when free-flowing information is essential for effective decision-making. In a closed system with limited boundary scanning, information handling is a fairly manageable task [School Library Journal, 39 (1993) 146]. However, where sources of data and/or decisions are high volume, encompass a large geographic area, and cover a gamut of organizational entities, information gathering and fusing can be daunting [FEMA, Publication No. 229 (4) (1995)]. This paper analyzes the workflow typical in a disaster scenario and discusses the design considerations for a virtual information center (VIC) that can both efficiently and effectively coordinate and process a large number of information requests for disaster preparation/management/recovery teams. The proposed design is domain independent, uses a net-centric approach, and can be readily exported to many other governmental and organizational decision environments. The prototype version of the system uses the object-oriented model in connecting to multiple databases across the Internet and has all the essential features that can readily be cloned to enlarge the system's scope.

48 citations


Journal ArticleDOI
TL;DR: This computer system has greatly reduced the time necessary to analyze the data from a pharmacokinetic study and the reliability of the data is increased by eliminating data transposition errors and by forcing uniformity in sample processing.
Abstract: This article represents the first description of a complete pharmacokinetic data management system. This system uses an HP-3357 computer for data acquisition and an IBM mainframe for sample processing. This system is unique in that it provides sample management capabilities, automatic data collection, GLP documentation, and estimates of pharmacokinetic parameters. This computer system has greatly reduced the time necessary to analyze the data from a pharmacokinetic study. Furthermore, the reliability of the data is increased by eliminating data transposition errors and by forcing uniformity in sample processing.

41 citations


Proceedings ArticleDOI
01 Oct 1987
TL;DR: This work discusses object-orientation as more than an implementation paradigm, and shows how an object-oriented approach simplifies both use and implementation of engineering design systems.
Abstract: An object-oriented approach to management of engineering design data requires object persistence, object-specific rules for concurrency control and recovery, views, complex objects and derived data, and specialized treatment of operations, constraints, relationships and type descriptions. We discuss object-orientation as more than an implementation paradigm, and show how an object-oriented approach simplifies both use and implementation of engineering design systems.

Journal Article
TL;DR: There is increasing recognition of the need for a physician-level medical information specialist to serve as an institution's chief information officer, assuming responsibility for the collection, manipulation, and availability of all patient care-related data.
Abstract: The demands for information retrieval, processing, and synthesis placed on all providers of health care have increased dramatically in the last several decades. Although systems have been developed to capture charge-related data in support of cost reimbursement, there has been a conspicuous lack of attention paid to information tools to directly enhance the delivery of patient care. The termination of cost reimbursement, together with an increasing recognition of the problems inherent in current manual record-keeping systems, is creating a significant new focus on medical information. This change in focus requires a shift in systems orientation away from financial and departmentally centered systems and toward patient-centered approaches. There is thus increasing recognition of the need for a physician-level medical information specialist to serve as an institution's chief information officer, assuming responsibility for the collection, manipulation, and availability of all patient care-related data. By virtue of training, typical experience, hospital presence, and a noncompetitive position with the hospital's medical staff, the pathologist is uniquely suited for this position. To effectively perform this role, a variety of new specialized data management tools are becoming available. Integrated information systems, patient care management by exception, decision support tools, and, in the future, "artificial intelligence" assists can all be expected to become staples of pathology practice, especially impacting those pathologists who choose to be responsive to the new practice milieu of medical information science.

Proceedings ArticleDOI
01 Dec 1987
TL;DR: This paper first outlines the various types of information that fall under the purview of the proposed data manager, and considers extensions to the entity-relationship data model to implement the notion of hierarchical ordering, commonly found in musical data.
Abstract: As part of our research into a general purpose data management system for musical information, a major focus has been the development of tools to support a data model for music. This paper first outlines the various types of information that fall under the purview of our proposed data manager. We consider extensions to the entity-relationship data model to implement the notion of hierarchical ordering, commonly found in musical data. We then present examples from our schema for representing musical notation in a database, taking advantage of these extensions.

Journal ArticleDOI
TL;DR: This paper examines the function of long-range planning in an MIS organization with particular attention to the issue management process, and suggests ways in which other organizations contemplating issues management might develop, implement, and maintain this component of the overall planning process.
Abstract: Major corporations have tried in recent years to formalize planning processes in their MIS organizations in response to the growing importance of information processing to corporate business functions. This paper examines the function of long-range planning in an MIS organization with particular attention to the issue management process. The paper critiques the process, identifies both successes and difficulties, and suggests ways in which other organizations contemplating issues management might develop, implement, and maintain this component of the overall planning process.

Book
01 Jan 1987
TL;DR: This chapter discusses data base management, structural analysis, decision making and decision models, and minimum standards for Ethnography.
Abstract: Contents: Introduction. PART ONE: DATA BASE MANAGEMENT AND ANALYSIS: Data Base Management; Structural Analysis; Beyond Structural Analysis; Plans and Decisions; Decision Making and Decision Models; Other Semantic Systems; Analysis of Texts; Ethnoscience and Statistics. PART TWO: POST FIELDWORK: Planning the Ethnographic Report; Writing the Ethnographic Report. PART THREE: EPILOGUE: Minimum Standards for Ethnography.


Journal ArticleDOI
TL;DR: ADAMO is a data management system for defining data and manipulating them from FORTRAN programs; it combines a form of the Entity-Relationship model with the data flow diagrams of structured analysis to provide a system suited to algorithmic work.


Proceedings ArticleDOI
01 Dec 1987
TL;DR: The reclustering of Copy-Compact and the cost amortization of Reference Count are combined to great advantage in Baker's algorithm, which proves to be the least prohibitive for operating on disk-based data.
Abstract: When providing data management for nontraditional data, database systems encounter storage reclamation problems similar to those encountered by virtual memory managers. The paging behavior of existing automatic storage reclamation schemes as applied to objects stored in a database management system is one indicator of the performance cost of various features of storage reclamation algorithms. The results of modeling the paging behavior suggest that Mark and Sweep causes many more input/output operations than Copy-Compact. A contributing factor to the expense of Mark and Sweep is that it does not recluster memory as does Copy-Compact. If memory is not reclustered, the average cost of accessing data can go up tremendously. Other algorithms that do not recluster memory also suffer performance problems, namely all reference counting schemes. The main advantage of a reference count scheme is that it does not force a running program to pause for a long period of time while reclamation takes place; instead, it amortizes the cost of reclamation across all accesses. The reclustering of Copy-Compact and the cost amortization of Reference Count are combined to great advantage in Baker's algorithm. This algorithm proves to be the least prohibitive for operating on disk-based data.
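Of the schemes compared, reference counting is the simplest to show in miniature: reclamation cost is spread over individual pointer updates, so there is no long pause, but nothing is ever relocated, which is exactly why it loses the clustering benefit the paper attributes to Copy-Compact and Baker's algorithm. The sketch below is a generic toy reference counter, not the paper's simulation; the object ids and method names are invented.

    # Toy reference counting: each dropped reference may reclaim an object
    # immediately (amortized, pause-free), but freed objects are never
    # relocated, so on-disk clustering degrades; cycles are never collected.

    class Heap:
        def __init__(self):
            self.objects = {}        # oid -> list of oids it references
            self.refcount = {}
            self.next_oid = 0

        def alloc(self):
            oid = self.next_oid
            self.next_oid += 1
            self.objects[oid] = []
            self.refcount[oid] = 0   # roots are simply never released here
            return oid

        def add_ref(self, src, dst):
            self.objects[src].append(dst)
            self.refcount[dst] += 1

        def drop_ref(self, src, dst):
            self.objects[src].remove(dst)
            self._release(dst)

        def _release(self, oid):
            self.refcount[oid] -= 1
            if self.refcount[oid] == 0:
                for child in self.objects.pop(oid):
                    self._release(child)     # transitively free children
                del self.refcount[oid]

    heap = Heap()
    a, b, c = heap.alloc(), heap.alloc(), heap.alloc()
    heap.add_ref(a, b)
    heap.add_ref(b, c)
    heap.drop_ref(a, b)              # b and, transitively, c are reclaimed
    print(sorted(heap.objects))      # only the root object a remains: [0]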

Journal ArticleDOI
TL;DR: Geoshare, a geotechnical database developed at Queen Mary College as a comprehensive data management tool for site investigation data, is capable of storing and searching free-format geotechnical records.
Abstract: A geotechnical database called Geoshare has been developed at Queen Mary College to act as a comprehensive data management tool for site investigation data (Wood et al. 1982). It is capable of storing and searching ‘free format’ geotechnical records, and is designed to handle all aspects of data management in a site investigation project, from the input of the records to the analysis of the retrieved information. The application of Geoshare to the reconstruction of a superficial deposit stratigraphy is demonstrated, showing the use of the retrieval facilities as applied to borehole data collected in the site investigation for a trunk road scheme in S. Wales. The problems of poor borehole/trial pit log descriptions are discussed in the light of the use of Geoshare, and a software-compatible terminological format for log descriptions is suggested, adapted from an engineering-based revision of British Standard 5930 (1981).

Journal ArticleDOI
01 Dec 1987
TL;DR: The major issues pertaining to the integration of demon representation and processing into a knowledge management environment are explored to serve as a basis for design and implementation of more flexible and powerful environments for decision support.
Abstract: Decision support systems depend on a variety of knowledge management techniques. These range from data base management, programming, and spreadsheet analysis to rule set management and automated inference. One valuable knowledge management technique that has yet to find its way into the repertoire of decision support system developers is general-purpose demon management. This article identifies and explores the major issues pertaining to the integration of demon representation and processing into a knowledge management environment. These serve as a basis for design and implementation of more flexible and powerful environments for decision support.
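A demon, in this sense, is a condition-action pair that the environment evaluates automatically whenever monitored state changes. The sketch below is a minimal, generic illustration of demon registration and firing; it is not drawn from the article's design, and all names are hypothetical.

    # Minimal demon management: demons are (condition, action) pairs that
    # fire automatically whenever a monitored fact is updated.

    class KnowledgeBase:
        def __init__(self):
            self.facts = {}
            self.demons = []

        def add_demon(self, condition, action):
            self.demons.append((condition, action))

        def update(self, name, value):
            self.facts[name] = value
            for condition, action in self.demons:
                if condition(self.facts):
                    action(self.facts)

    kb = KnowledgeBase()
    kb.add_demon(condition=lambda f: f.get("inventory", 0) < 10,
                 action=lambda f: print("demon fired: reorder stock"))
    kb.update("inventory", 25)   # condition false, nothing happens
    kb.update("inventory", 4)    # demon fires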

Journal ArticleDOI
TL;DR: It is suggested that the integration of data and knowledge management might enhance the overall acceptance by medical staff of a computerised system, and facilitate the validation of a knowledge base.
Abstract: An expert system has been integrated into the data management system of the ARTEMIS programme for hypertensive patients. The patient database, which has been used since 1975, contains the medical records of about 20,000 patients. Information is interactively entered by physicians, nurses and secretaries on video display units. The computerised medical record has replaced the traditional handwritten medical record. The database management system is used to produce different summary reports (inpatient and outpatient care) and personalized recall letters which are mailed to the patients before their appointments. Suggestions provided by the expert system include additional information to be obtained (complementary patient interrogation, biological or radiological investigations, etc.), possible causes of hypertension, and medical prescriptions. The information base allows the description of both static knowledge (in the form of a semantic network) and dynamic knowledge (in the form of production rules). The inference system sequentially uses a combination of forward and backward chaining and performs both exact and approximate reasoning. The diagnostic performance of the expert system was evaluated in 100 cases of hypertension (50 of essential hypertension and 50 of secondary hypertension). Concordance between the diagnosis proposed by the expert system and the one proposed by the specialist was achieved in 92% of secondary hypertension cases and 88% of essential hypertension cases. It is suggested that the integration of data and knowledge management might enhance the overall acceptance by medical staff of a computerised system, and facilitate the validation of a knowledge base.
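The production-rule side of such a system can be sketched with a few lines of forward chaining: rules fire whenever all their conditions are present, adding new conclusions until nothing more can be derived. The rules below are invented placeholders with no medical standing, and the code is not taken from ARTEMIS.

    # Toy forward chaining over production rules; the rules are invented
    # placeholders and carry no medical meaning.

    rules = [
        ({"high_bp", "young_patient"}, "suspect_secondary_hypertension"),
        ({"suspect_secondary_hypertension", "low_potassium"}, "order_further_tests"),
    ]

    def forward_chain(facts, rules):
        """Fire any rule whose conditions all hold, until a fixed point."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"high_bp", "young_patient", "low_potassium"}, rules))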


Journal ArticleDOI
01 Dec 1987-Poetics
TL;DR: The usefulness of a number of descriptive and statistical methods for analysis is illustrated on the basis of a sample of investigations current at the department of the sociology of literature at Tilburg University.

Book
01 Jan 1987
TL;DR: This dissertation explores various issues related to the application of computer data management techniques to musical information, particularly those remaining issues that need be addressed in order to develop a functional database system appropriate to the management of musical information.
Abstract: This dissertation explores various issues related to the application of computer data management techniques to musical information. The contribution of this work is twofold: (1) It extends an existing data model (the entity-relationship model) to support a database schema for musical information. (2) It develops particular data management access methods to effectively manipulate information in the musical database. Various types of musical information are analyzed, focusing particularly on the conceptual representations necessary to formally encode musical scores expressed in Common Musical Notation (CMN). The entity-relationship model is taken as a starting point for a CMN database schema. A new feature is added to the entity-relationship model to represent the notion of ordered sets of entities. This property is called hierarchical ordering. Such a relationship occurs, for example, when an ordered set of notes constitutes a chord, or an ordered set of measures forms a movement of a composition. A method is proposed for representing attribute inheritance among entities, and various approaches to the problem of managing this inheritance among entities within the music database are considered. Several examples demonstrate that inheritance under hierarchical ordering is more complex than that supported by standard generalization hierarchies. Using these two data modeling tools, hierarchical ordering and attribute inheritance, a schema is developed for CMN. Entities from the CMN schema are divided into groups according to various aspects of the information: temporal, timbral, and graphical. To implement hierarchical ordering using existing relational database technology, extensions to relational access methods are developed. Entity ordering is supported by the introduction of ordered relations. A data structure is developed for ordered relations which provides a general solution for the support of user-defined aggregate functions computed over ordered sets, and user-defined orderings, including hierarchical orderings. The dissertation closes with a summary of questions left open by the present research, particularly those remaining issues that need be addressed in order to develop a functional database system appropriate to the management of musical information.
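Hierarchical ordering, the dissertation's central extension, amounts to ordered sets of entities that nest (an ordered set of notes forming a chord, an ordered set of measures forming a movement) and over which aggregates can be computed. The sketch below is a hypothetical Python illustration of that notion, not the dissertation's ordered-relation data structure; all names are invented.

    # Hypothetical illustration of hierarchical ordering: ordered, nested
    # sets of musical entities with a user-defined aggregate over them.

    class OrderedEntity:
        def __init__(self, name, children=(), duration=0):
            self.name = name
            self.children = list(children)   # insertion order is significant
            self.duration = duration

        def total_duration(self):
            """Aggregate computed over the ordered hierarchy."""
            if not self.children:
                return self.duration
            return sum(child.total_duration() for child in self.children)

    measure1 = OrderedEntity("m1", [OrderedEntity("C4", duration=1),
                                    OrderedEntity("E4", duration=1)])
    measure2 = OrderedEntity("m2", [OrderedEntity("G4", duration=2)])
    movement = OrderedEntity("movement 1", [measure1, measure2])
    print(movement.total_duration())         # 4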



Journal ArticleDOI
TL;DR: The details of a microcomputer-based, distributed data management system in the acquisition of data collected in a large multicentered cooperative investigation of transfusion-transmitted acquired immunodeficiency syndrome (AIDS) are described.

Book
01 Feb 1987
TL;DR: A book offering a practical introduction to database management.

Abstract: Introduction to Database Management: A Practical Approach is a book offering a practical introduction to database management.

Journal ArticleDOI
TL;DR: The Scheduled Measurement System is a collection of related programs for administering performance and rating tasks to human subjects; it was designed to provide a common task scheduling and data management environment for a diversity of measures.

Abstract: The Scheduled Measurement System (SMS) is a collection of related programs for administering performance and rating tasks to human subjects. It was designed to provide a common task scheduling and data management environment for a diversity of measures. SMS has greatly reduced the effort required to analyze and archive data after experiments, but it has been less successful in reducing some of the effort required to implement experiments. While programming is not difficult, documentation and user training remain the most primitive parts of the system.