Showing papers on "Data access" published in 1991


Proceedings ArticleDOI
01 Aug 1991
TL;DR: In this article, a new hardware prefetching scheme based on predicting the execution of the instruction stream and the associated operand references is proposed; the scheme relies on a reference prediction table and its associated logic.
Abstract: Conventional cache prefetching approaches can be either hardware-based, generally using a one-block-lookahead technique, or compiler-directed, with insertions of non-blocking prefetch instructions. We introduce a new hardware scheme based on the prediction of the execution of the instruction stream and associated operand references. It consists of a reference prediction table and a look-ahead program counter and its associated logic. With this scheme, data with regular access patterns is preloaded, independently of the stride size, and preloading of data with irregular access patterns is prevented. We evaluate our design through trace-driven simulation by comparing it with a pure data cache approach under three different memory access models. Our experiments show that this scheme is very effective for reducing the data access penalty for scientific programs and that it has moderate success for other applications.
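The prediction mechanism lends itself to a small simulation. The sketch below is one plausible reading of a reference prediction table indexed by instruction address: the field names, state labels, and confirm-before-prefetch policy are illustrative assumptions, not the paper's exact design.

```python
# Illustrative reference-prediction-table (RPT) stride prefetcher.
class RPTEntry:
    def __init__(self, addr):
        self.prev_addr = addr   # last operand address seen for this instruction
        self.stride = 0         # predicted distance between successive accesses
        self.state = "initial"  # "initial" -> "transient" -> "steady"

class StridePrefetcher:
    def __init__(self):
        self.table = {}         # keyed by the program counter of the load/store

    def access(self, pc, addr):
        """Record a data access; return an address worth preloading, or None."""
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = RPTEntry(addr)
            return None
        observed = addr - entry.prev_addr
        entry.prev_addr = addr
        if observed == entry.stride:
            # Regular pattern confirmed: preload one stride ahead,
            # whatever the stride size.
            entry.state = "steady"
            return addr + entry.stride
        # Irregular pattern: learn the new stride but suppress preloading.
        entry.state = "transient"
        entry.stride = observed
        return None

prefetcher = StridePrefetcher()
for i in range(4):                       # a load sweeping an array of doubles
    hint = prefetcher.access(pc=0x400, addr=0x1000 + 8 * i)
print(hex(hint))                         # 0x1020, one stride past the last access
```

On the first two accesses the entry is still learning, so nothing is preloaded; from the third access on, each regular reference triggers a prefetch of the next element.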

458 citations


Journal ArticleDOI
TL;DR: This article presents a scenario for the future of research access to federally collected microdata, discussing recent developments in statistical, administrative and legal approaches to responsible data dissemination as they relate to improvements in database techniques, computer and analytical methodologies, and legal and administrative arrangements for access to and protection of federal statistics.
Abstract: This article presents a scenario for the future of research access to federally collected microdata. Many researchers find access to government databases increasingly desirable. The databases themselves are more comprehensive, of better quality and, with improved database management techniques, better structured. Advances in computer communications enable remote access to these databases. Substantial gains in the performance/cost ratio of computers permit more sophisticated analyses, including ones based on statistical graphics, identification of extreme or influential values, record linkage and Bayesian regression methods. At the same time, the individuals and institutions that provide the data residing on government databases, as well as the agencies that sponsor the collection of such information, are becoming increasingly aware that the same technologies that extend analytical capabilities also furnish tools that threaten the confidentiality of data records. As the broker between the data provider and the data user, government agencies are under increased pressure to implement policies that both increase data access and ensure confidentiality. In response to these cross-pressures, agencies will more actively pursue statistical, administrative and legal approaches to responsible data dissemination. Recent developments in these approaches are discussed as they relate to improvements in database techniques, computer and analytical methodologies and legal and administrative arrangements for access to and protection of federal statistics.

172 citations


Journal ArticleDOI
01 Dec 1991
TL;DR: To solve the problem of interdatabase data manipulation within a heterogeneous environment, a broader definition of the join operator is proposed, and a method to probabilistically estimate the accuracy of the join is discussed.
Abstract: Many important information systems applications require access to data stored in multiple heterogeneous databases. This paper examines a problem in interdatabase data manipulation within a heterogeneous environment, where conventional techniques are no longer useful. To solve the problem, a broader definition of the join operator is proposed. Also, a method to probabilistically estimate the accuracy of the join is discussed.
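As a sketch of what a "broadened" join can mean, the fragment below matches tuples from two autonomous databases on approximate rather than exact attribute equality and attaches a rough confidence score to each match. The similarity measure, the threshold, and the table contents are illustrative assumptions, not the paper's method.

```python
# Approximate ("broadened") join across heterogeneous tables.
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity in [0, 1]; a stand-in for a real matcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def approximate_join(left, right, key, threshold):
    """Yield (left_row, right_row, score) for attribute values that are
    similar enough; the score is a rough confidence in the match."""
    for lrow in left:
        for rrow in right:
            score = similarity(lrow[key], rrow[key])
            if score >= threshold:
                yield lrow, rrow, score

customers = [{"name": "Acme Corp.", "city": "Chicago"}]
accounts  = [{"name": "ACME Corporation", "balance": 1200},
             {"name": "Zenith Ltd.", "balance": 300}]
for lrow, rrow, score in approximate_join(customers, accounts, "name", 0.6):
    print(lrow["city"], rrow["balance"], round(score, 2))   # Chicago 1200 0.69
```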

108 citations


Proceedings ArticleDOI
01 Sep 1991
TL;DR: This paper examines alternative data access microarchitectures that effectively support compiler-assisted data prefetching in superscalar processors and shows that a small data cache with compiler-assisted data prefetching can achieve a performance level close to that of an ideal cache.
Abstract: The performance of superscalar processors is more sensitive to memory system delay than that of their single-issue predecessors. This paper examines alternative data access microarchitectures that effectively support compiler-assisted data prefetching in superscalar processors. In particular, a prefetch buffer is shown to be more effective than increasing the cache dimension in solving the cache pollution problem. All in all, we show that a small data cache with compiler-assisted data prefetching can achieve a performance level close to that of an ideal cache.

105 citations


Journal Article
TL;DR: This paper focuses on the issues of making data available and useful to the user from the viewpoint of the functions which must be provided by archives of spatial data.
Abstract: The integration of remote sensing tools and technology with the spatial analysis orientation of geographic information systems is a complex task. In this paper, we focus on the issues of making data available and useful to the user. In part, this involves a set of problems which reflect on the physical and logical structures used to encode the data. At the same time, however, the mechanisms and protocols which provide information about the data, and which maintain the data through time, have become increasingly important. We discuss these latter issues from the viewpoint of the functions which must be provided by archives of spatial data.

85 citations


Patent
25 Apr 1991
TL;DR: In this paper, a system and method for developing specialized data processing systems for tracking items through a business process is presented, which allows rapid creation of a specific data processing system based upon a series of generic process rules previously developed and stored in the system.
Abstract: A system and method for developing specialized data processing systems for tracking items through a business process. The method allows rapid creation of a specific data processing system based upon a series of generic process rules previously developed and stored in the system. Process activity definitions, activity path transitions, data access, and operator interaction panels are defined. Based upon the user-supplied inputs and generic rules, a complete data processing system is generated. In operation, the generated system operates using a process flow controller to manage the processing steps. Each item being tracked through the system has an associated item status. The controller is conditionally responsive to item state and item data content when determining which activities are available for selection and which processes and operators are authorized. The present invention implements state-sensitive process automation in that tasks are assigned and processed only when such assignment is indicated by the current item state and associated data content.
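The controller's state-sensitive behavior can be pictured in a few lines of code. In this hedged sketch, a rule table pairs each activity with the item state and data predicate that enable it; the rule format and the item fields are invented for illustration.

```python
# State-sensitive process flow controller: an activity is offered only when
# the tracked item's current state and data content permit it.
process_rules = [
    # (activity, required item state, data-content predicate)
    ("review",   "submitted", lambda item: True),
    ("approve",  "reviewed",  lambda item: item["amount"] < 10_000),
    ("escalate", "reviewed",  lambda item: item["amount"] >= 10_000),
]

def available_activities(item):
    """Return the activities currently selectable for this item."""
    return [activity for activity, state, permitted in process_rules
            if item["state"] == state and permitted(item)]

item = {"id": 42, "state": "reviewed", "amount": 25_000}
print(available_activities(item))   # ['escalate']
```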

69 citations


Patent
Chandrasekaran Mohan
28 Mar 1991
TL;DR: In this article, the authors propose enabling new transactions to access data during restart recovery UNDO processing, on the condition that the last update to the data occurred before a commit point defined by the beginning of the earliest-commencing transaction with uncommitted updates that was still executing when a system failure initiated restart recovery operations.
Abstract: Enhanced data availability occurs in a write-ahead logging, transaction-oriented database system by permitting new transactions to acquire access to data while restart recovery operations are proceeding. The invention permits new transactions to acquire access to data during restart recovery UNDO processing on the condition that the last update to the data occurred before a commit point measured by the beginning of the earliest-commencing transaction with uncommitted updates which was still executing when a system failure initiated restart recovery operations. During REDO processing, a transaction is permitted access to data which, in addition to meeting the commit point condition, is not in a data structure subject to the REDO processing.
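The access condition reduces to a comparison against the commit point. The sketch below is a simplified reading in which each data item carries the log sequence number (LSN) of its last update; this LSN bookkeeping is an assumption for exposition, not the patent's exact mechanism.

```python
# Simplified access checks during restart recovery.
def may_access_during_undo(last_update_lsn, commit_point_lsn):
    # Safe if the item's most recent update predates the begin point of the
    # earliest transaction that still had uncommitted updates at failure:
    # UNDO processing can then never touch this item.
    return last_update_lsn < commit_point_lsn

def may_access_during_redo(last_update_lsn, commit_point_lsn, in_redo_structure):
    # During REDO the item must additionally lie outside any data structure
    # currently subject to REDO processing.
    return last_update_lsn < commit_point_lsn and not in_redo_structure

print(may_access_during_undo(last_update_lsn=880, commit_point_lsn=900))   # True
print(may_access_during_redo(880, 900, in_redo_structure=True))            # False
```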

64 citations


Patent
22 Oct 1991
TL;DR: In this paper, a method for arbitrating access by a plurality of agents to a bus utilizes a priority access list; each agent's position on the list indicates its relative priority of access to the bus.
Abstract: A method for arbitrating access by a plurality of agents to a bus utilizes a priority access list. Each agent in the plurality of agents has a position on the priority access list. This position indicates the agent's relative priority level of access to the bus. When at least one agent from the plurality of agents requests access to the bus, bus access is granted to the agent among the requesting agents which is highest on the priority access list. Once an agent from the plurality of agents has gained access to the bus, the agent which gained access to the bus is moved to the bottom of the priority access list.
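The arbitration method amounts to a rotating-priority scheme. Below is a direct, minimal transcription of the steps described above; the agent names and the request set are illustrative.

```python
# Priority-list bus arbitration: the highest-ranked requester wins, then
# rotates to the bottom of the list.
def arbitrate(priority_list, requesters):
    """Grant the bus to the highest-priority requesting agent, demote the
    winner to the bottom of the list, and return it (or None)."""
    for agent in list(priority_list):
        if agent in requesters:
            priority_list.remove(agent)
            priority_list.append(agent)     # winner drops to lowest priority
            return agent
    return None

agents = ["A", "B", "C", "D"]               # front of list = highest priority
print(arbitrate(agents, {"B", "D"}))        # B (outranks D)
print(agents)                               # ['A', 'C', 'D', 'B']
print(arbitrate(agents, {"B", "D"}))        # D wins this time: B was demoted
```

Because every winner moves to the bottom, no requesting agent can be starved indefinitely.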

64 citations


Proceedings ArticleDOI
30 Sep 1991
TL;DR: The desire to maintain ready access to data files covering the entire history of the Tokamak Fusion Test Reactor has created a need for more economical methods of mass storage; this need was addressed by purchasing a SONY optical disk subsystem with an auto changer (the 'Jukebox'), which provides 160 Gbytes of online storage.
Abstract: The desire to maintain ready access to data files covering the entire history of Tokamak Fusion Test Reactor (TFTR) has created a need for more economical methods of mass storage. This need was addressed through a combined hardware and software approach. This involved the purchase of a SONY optical disk subsystem with an auto changer (the 'Jukebox') which provides 160 Gbytes of online storage. The software constitutes a transparent layer integrating the Jukebox into the existing data file access methods. The system caches files on magnetic disks, which are managed automatically, so that they never exceed capacity. Newly created files are copied to the Jukebox for long-term storage and to magnetic tapes for backup purposes. The data remain on magnetic disks until a need for more space causes them to be skimmed according to frequency of use. Users can access any skimmed file just as though it were still on the magnetic cache. The system restores the file transparently within seconds, without need for manual intervention.
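The caching layer's behavior can be summarized in a short sketch: files are fetched transparently from the jukebox into a magnetic-disk cache, and the least-used files are skimmed when space runs low. The capacity accounting and eviction ordering below are assumptions for illustration.

```python
# Transparent magnetic-disk cache in front of an optical jukebox.
class HSMCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}                        # name -> (size, access_count)

    def read(self, name, size, fetch_from_jukebox):
        if name not in self.files:
            self._skim(size)
            fetch_from_jukebox(name)           # restored within seconds,
            self.files[name] = (size, 0)       # no manual intervention
        sz, count = self.files[name]
        self.files[name] = (sz, count + 1)

    def _skim(self, needed):
        used = sum(sz for sz, _ in self.files.values())
        # Skim the least-used files until the incoming file fits.
        for name in sorted(self.files, key=lambda n: self.files[n][1]):
            if used + needed <= self.capacity:
                break
            used -= self.files.pop(name)[0]    # safe: a copy is on the jukebox

cache = HSMCache(capacity=100)                 # units are arbitrary here
cache.read("shot_123.dat", 60, lambda n: print("restoring", n))
cache.read("shot_124.dat", 60, lambda n: print("restoring", n))  # skims shot_123
```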

38 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to show how an automated solution that integrates cost and schedule control functions and provides distributed access to data for different processing needs was properly designed and developed.
Abstract: Effective management of construction projects depends on good access to and control of data, especially data pertaining to cost and schedule control functions. Researchers have long recognized the need to integrate these interrelated functions and have proposed conceptual models to achieve this purpose. However, the proper design of an automated solution that would support the needs of integrated cost and schedule control and provide distributed access to data for different processing needs has been lacking. The purpose of this paper is to show how such an automated solution (represented by a data storage model) was properly designed and developed. The data storage model is developed using the first three steps of the process used to model engineering problems. The first step, problem definition, was accomplished using the work packaging model to solve the integration problem and by using data collection forms developed by R. S. Means Company to identify what data items are collected in cost and schedule control...

34 citations


Proceedings ArticleDOI
01 Aug 1991
TL;DR: Nomenclator is an architecture for providing efficient descriptive (attribute-based) naming in a large internet environment; its prototype uses X.500 as its underlying data repository, and it will eventually incorporate other name services in addition to X.500.
Abstract: Nomenclator is an architecture for providing efficient descriptive (attribute-based) naming in a large internet environment. As a test of the basic design, we have built a Nomenclator prototype that uses X.500 as its underlying data repository. X.500 SEARCH queries that previously took several minutes can, in many cases, be answered in a matter of seconds. Our system improves descriptive query performance by trimming branches of the X.500 directory tree from the search. These tree-trimming techniques are part of an active catalog that constrains the search space as needed during query processing. The active catalog provides information about the data distribution (meta-data) to constrain query processing on demand. Nomenclator caches both data (responses to queries) and meta-data (data distribution information, tree-trimming techniques, data access techniques) to speed future queries. Nomenclator relieves users of the need to understand the structure of the name space to locate objects quickly in a large, structured name environment. Nomenclator is a meta-level service that will eventually incorporate other name services in addition to X.500. Its techniques for improving performance should be generally applicable to other naming systems. Research supported in part by an AT&T Ph.D. Scholarship, National Science Foundation grants CCR-8703373 and CCR-8815928, Office of Naval Research grant N00014-89-J-1222, and a Digital Equipment Corporation External Research Grant.
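The tree-trimming idea can be reduced to a lookup in the active catalog: meta-data about how attribute values are distributed across directory subtrees tells the resolver which branches can possibly contain matches. The catalog layout below is an illustrative assumption, not Nomenclator's actual data structure.

```python
# Catalog-driven search-space trimming for descriptive naming.
catalog = {   # (attribute, value) -> subtrees known to contain such entries
    ("dept", "physics"): ["/us/univ-a", "/us/univ-b"],
    ("dept", "law"):     ["/us/univ-c"],
}
all_subtrees = ["/us/univ-a", "/us/univ-b", "/us/univ-c", "/us/univ-d"]

def subtrees_to_search(attr, value):
    """Trim the directory tree; fall back to an exhaustive search only when
    no distribution meta-data is cached for this attribute."""
    return catalog.get((attr, value), all_subtrees)

print(subtrees_to_search("dept", "physics"))   # 2 subtrees instead of 4
```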

Book ChapterDOI
01 Apr 1991
TL;DR: This paper describes SIMPLE, a performance evaluation tool environment for parallel and distributed systems based on monitoring of concurrent interdependent activities; its tools use the data access interface TDL/POET, which makes the evaluation independent of the monitor device used and the system monitored.
Abstract: This paper describes SIMPLE: a performance evaluation tool environment for parallel and distributed systems based on monitoring of concurrent interdependent activities. We emphasize the tool environment as a prerequisite for successful performance evaluation. All tools use the data access interface TDL/POET, which can decode measured data of arbitrary structure, format and representation. This makes the evaluation independent of the monitor device(s) used and the system monitored. It also provides a problem-oriented way of accessing the data. Therefore it is very easy to adapt SIMPLE to any kind of measured data and to understand the evaluation results.
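The decoding idea behind a monitor-independent access interface can be sketched briefly: the event-record layout is described declaratively, and a generic decoder is driven by that description, so evaluation tools never hard-code one monitor's format. The description syntax below is invented for illustration and is not TDL.

```python
# Description-driven decoding of measured event records.
import struct

event_description = [        # (field name, struct format code)
    ("timestamp", "I"),      # 32-bit unsigned
    ("processor", "H"),      # 16-bit unsigned
    ("event_id",  "H"),
]

def decode_event(raw):
    fmt = "<" + "".join(code for _, code in event_description)
    values = struct.unpack(fmt, raw)
    return dict(zip((name for name, _ in event_description), values))

raw = struct.pack("<IHH", 1024, 3, 77)         # a fabricated monitor record
print(decode_event(raw))   # {'timestamp': 1024, 'processor': 3, 'event_id': 77}
```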

Journal ArticleDOI
TL;DR: It is shown that certain general rearrangement rules can be modified to reduce significantly the number of data moves, without affecting the asymptotic cost of a data access.
Abstract: We consider self-organizing data structures when the number of data accesses is unknown. We show that certain general rearrangement rules can be modified to reduce significantly the number of data moves, without affecting the asymptotic cost of a data access. As a special case, explicit formulae are given for the expected cost of a data access and the expected number of data moves for the modified move-to-front rules for linear lists and binary trees. Since a data move usually costs at least as much as a data access, the modified rule eventually leads to a savings in total cost (the sum of data accesses and moves).
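One way to reduce moves without hurting asymptotic access cost is to move an element to the front only on every k-th access. The counter-based rule below is a plausible reading of such a modification, shown for a linear list; it is not claimed to be the paper's exact rule.

```python
# Modified move-to-front: move only after k accesses since the last move.
class ModifiedMTFList:
    def __init__(self, items, k=3):
        self.items = list(items)
        self.k = k
        self.counts = {item: 0 for item in self.items}

    def access(self, item):
        cost = self.items.index(item) + 1      # comparisons to find the item
        self.counts[item] += 1
        if self.counts[item] >= self.k:        # move only every k-th access
            self.items.remove(item)
            self.items.insert(0, item)
            self.counts[item] = 0
        return cost

lst = ModifiedMTFList("abcde", k=3)
total = sum(lst.access("d") for _ in range(3))
print(lst.items, total)   # ['d', 'a', 'b', 'c', 'e'] 12 -- one move, not three
```

Hot items still migrate toward the front, so the expected access cost keeps its asymptotic behavior while the number of (expensive) moves drops by roughly a factor of k.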

Patent
23 Jan 1991
TL;DR: A portable device for the electronic display of reading and reference material, offering an alternative to books and other paper-based presentation formats, is presented in this paper. The device can access and retrieve text, data and graphics from previously encoded units of computer-addressable storage media and transmit these data to an integrated monochrome or colour video screen.
Abstract: A portable device for the electronic display of reading and reference material to allow for an alternative to books and other paper based presentation formats. Provides the capability to access and retrieve text, data and graphics from previously encoded units of computer addressable storage media, and transmit these data to an integrated monochrome or colour video screen. The data will reside on small pre-prepared units of high volume data storage media (e.g. 3 1/2 inch computer diskettes), which are individually loaded into the device as required. User controlled functions allow predefined groups of data, i.e. pages, to be accessed and displayed sequentially forward or reverse, by page number, or by the specific page which was last displayed on individual data storage units through an electronic placemark function. Display functions include a capability to increase the size of the data characters on the viewscreen, to allow the device to be used by the visually impaired, and controls for adjustment of the visibility attributes (e.g. resolution, contrast, colour, tint, etc.). Protection and support is provided by a combination cover and adjustable stand. The device also incorporates a carrying handle to enhance portability. This invention is distinguished from and improves upon desktop or larger computers by: read only, non-programmable and non-programming functionality; simplicity of controls and operation; and portability. It is distinguished from and improves upon portable computers by: read only, non-programmable and non-programming functionality; simplicity of controls and operation; and lower cost. This invention is distinguished from and improves upon print and paper, or other non-electronic text and data output methods by: providing the capability of access to data residing on storage media which has far less volume and weight per unit of data; durability; text and data size adjustment capability; internal illumination; user convenience; and potential for the reduction of environmentally hazardous pulp and paper requirements.

Journal ArticleDOI
TL;DR: This report describes the results of a survey of 36 companies in the Chicago, New York City, and South Florida regions, with two goals: to examine database administration functions today, in terms of their organizations, reporting structures, and responsibilities, as they stand on the brink of the new era of end-user data access, and to learn from database administration managers the directions in which their organizations are heading.
Abstract: Database administration has traditionally been a function that has been internal to the data processing organization. Contact with corporate personnel outside of data processing has always been limited. But, at this point in the evolution of data processing, end-users, with a variety of new tools at their disposal, are showing a great deal of interest in direct data access. This report describes the results of a survey of 36 companies in the Chicago, New York City, and South Florida regions, with two goals. One was to examine database administration functions today, in terms of their organizations, reporting structures, and responsibilities, as they stand on the brink of this exciting, new era of end-user data access. The other was to learn from the database administration managers the directions in which their organizations are heading, in this context.

Journal ArticleDOI
TL;DR: A radical technology for databases, which implements a relational model for network services and scales to support a throughput of thousands of transactions per second, is proposed, and a set of data manipulation primitives useful in describing the logic of network services is described.
Abstract: A radical technology for databases, called the Datacycle architecture, which implements a relational model for network services and scales to support a throughput of thousands of transactions per second, is proposed. A set of data manipulation primitives useful in describing the logic of network services is described. The use of the relational model, together with an extended structured-query-language-like query language, to describe 800 service, network automatic call distribution, and directory-assisted call completion services is examined. The architectural constraints on the scalability of traditional database systems are reviewed, and an alternative, the Datacycle architecture, is presented. The Datacycle approach exploits the bandwidth of fiber optics to circulate database contents among processing nodes (e.g. switching offices or other network elements) in a network, providing highly flexible access to data and controlling the administrative and processing overhead of coordinating changes to database contents. A prototype system operating in the laboratory is described. The feasibility of the Datacycle approach for both existing and future applications is considered.
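The circulation idea is easy to caricature in code: the database contents stream past every node, and a query is answered by filtering one full broadcast cycle on the fly instead of issuing targeted reads. The record layout and predicate are illustrative assumptions.

```python
# Datacycle-style query evaluation over a circulating data stream.
def circulate(database):
    """Endlessly re-broadcast the database contents, one record at a time."""
    while True:
        yield from database

def select_one_cycle(stream, predicate, cycle_length):
    """Filter exactly one full broadcast cycle of the stream."""
    return [rec for _, rec in zip(range(cycle_length), stream) if predicate(rec)]

db = [{"number": "800-555-0101", "service": "acd"},
      {"number": "800-555-0199", "service": "routing"}]
stream = circulate(db)
print(select_one_cycle(stream, lambda r: r["service"] == "acd", len(db)))
```

Since every node sees the whole stream, adding nodes adds query capacity without coordinating disk accesses, which is the scalability argument made above.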

Journal ArticleDOI
Lloyd A. Treinish, C. Goettsche
TL;DR: A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences.
Abstract: Critical to the understanding of data is the ability to provide pictorial or visual representation of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains outside of computer graphics, such as data management, are required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc. to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, which can range from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses illustrate the importance of an effective pipeline in a data visualization system.
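The pipeline argument can be made concrete with a toy composition of stages. The stage boundaries below (access, transform, render) follow the article's framing; everything else is an illustrative assumption.

```python
# A generic visualization pipeline: data access -> transformation -> rendering.
def access(source):
    return source["values"]                  # data management / retrieval stage

def transform(values, scale=1.0):
    return [v * scale for v in values]       # units, coordinates, filtering

def render(values):
    for v in values:                         # stand-in for plots or imaging
        print("#" * int(v))

def pipeline(source):
    render(transform(access(source), scale=0.5))

pipeline({"values": [4, 8, 12]})
```

Because each stage sees only the previous stage's output, rendering techniques from line plots to volumetric imaging can be swapped in without touching data access or management.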

Journal ArticleDOI
Krishnan Padmanabhan
TL;DR: Interconnection structures that can provide access to multiple levels of a shared memory hierarchy in a multiprocessor are investigated, illustrating that, without resorting to separate networks for access at each level, several architectures can provide fast access at lower levels of the hierarchy and progressively slower access at higher levels.

Proceedings Article
01 Jan 1991
TL;DR: Several potential integration approaches are described as they relate to the DHCP Imaging System, a multi-specialty application that integrates multimedia data to provide clinicians with comprehensive patient-oriented information.
Abstract: The VA's hospital information system, the Decentralized Hospital Computer Program (DHCP), is an integrated system based on a powerful set of software tools with shared data accessible from any of its application modules. It includes many functionally specific application subsystems such as laboratory, pharmacy, radiology, and dietetics. Physicians need applications that cross these application boundaries to provide useful and convenient patient data. One of these multi-specialty applications, the DHCP Imaging System, integrates multimedia data to provide clinicians with comprehensive patient-oriented information. User requirements for cross-disciplinary image access can be studied to define needs for similar text data access. Integration approaches must be evaluated both for their ability to deliver patient-oriented text data rapidly and their ability to integrate multimedia data objects. Several potential integration approaches are described as they relate to the DHCP Imaging System.

Journal ArticleDOI
TL;DR: In this paper, the authors present an access control mechanism for distributed heterogeneous database management systems (DHDBMSs) that provides users with uniform and controlled access to information stored in different databases without having to deal with a variety of data formats, access protocols and interfaces.

Proceedings ArticleDOI
08 Jan 1991
TL;DR: An object-oriented modeling system (ELISA) is introduced in which the primitives of the ISA correspond to the natural language used by managers and executives to facilitate the integration and sharing of data.
Abstract: Presents an object-oriented design approach for modeling an information systems architecture (ISA). An ISA is a plan for modeling the global information requirements of an enterprise and provides a way to map the information needs of an organization, relate them to specific processes and document their interrelationships. This mapping is then used to guide applications development and to facilitate the integration and sharing of data. To facilitate these functions, an object-oriented modeling system (ELISA) is introduced in which the primitives of the ISA correspond to the natural language used by managers and executives. The research reported is a prototype design that focuses on a representation, storage, access, and manipulation scheme of enterprise meta-data. A repository system is described for the storage of information required by the ISA. An example shows how an object representation of the ISA is applied to the information requirements of a small business. Directions for further research are also discussed.

Proceedings Article
01 Jan 1991
TL;DR: An object-oriented framework is presented that offers integration of various types of entities at one workstation that is used to implement a prototype medical workstation for the support of clinical data analysis.
Abstract: An object-oriented framework is presented that offers integration of various types of entities at one workstation. Five types of entities are distinguished: data, knowledge, functions, presentation forms and hardware, and for each of these entities an 'accessor' is introduced. An accessor offers abstraction from the particularities of access to the entities. For the interaction with this framework a programming language has been defined. A restricted form of the framework has been used to implement a prototype medical workstation for the support of clinical data analysis.

Proceedings Article
28 Oct 1991
TL;DR: This report describes some of the work in progress as part of the Universal Name Semantics project at the University of Michigan, intended to explore issues involved in providing client programs with seamless naming of objects across heterogeneous name spaces.
Abstract: Much recent work on computer systems has focused on providing transparent resource-sharing in a distributed computing environment. Many of these systems use the server-client model to provide access to data and services. As more distributed services are offered and the demand for sharing increases in these environments, efficient management and accessing schemes become crucial. Locating services makes name service a critical part of access management. This report describes some of the work in progress as part of the Universal Name Semantics (UNS) project at the University of Michigan. UNS was intended to explore issues involved in providing client programs with seamless naming of objects across heterogeneous name spaces. UNS is distinguished from other heterogeneous naming systems in its attempt to partially automate the translation process by exploiting abstract similarity between name spaces.

Patent
05 Mar 1991
TL;DR: In this article, the authors propose to reduce processings required for conversion at the time of writing data from a main storage into a file by mutually converting data on the main storage with data on file only by means of the conversion of an absolute address and an offset.
Abstract: PURPOSE: To considerably reduce the processing required when writing data from main storage into a file, by converting between data in main storage and data in the file solely through the conversion of an absolute address and an offset. CONSTITUTION: An external storage 6 stores multiple objects so as to constitute a knowledge database. An input/output control part 10 is connected to a terminal equipment 7, CPU 8 and the main storage 9, and functions as an input/output interface connecting an electronic computer 1 with input/output devices such as the terminal equipment 7. An external storage control part 11 is connected to the device 6, CPU 8 and the main storage 9, and controls the input/output of knowledge data between the main storage 9 and the device 6. A knowledge base system (Jasmine) 12, the core of the knowledge database, is stored in the main storage 9 of the computer 1, and CPU 8 executes various processing on the knowledge base in accordance with the program. A part of the knowledge data 13 is held in the main storage 9, and the conversion between the data 13 and the data stored in the device 6 is the problem addressed.
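The address conversion at the heart of this patent can be sketched in two functions: objects keep the same layout in memory and in the file, so writing out a pointer only requires subtracting the segment base, and reading it back adds the base again. The base address and object layout are illustrative assumptions.

```python
# Absolute-address <-> file-offset conversion for persistent objects.
SEGMENT_BASE = 0x5000_0000        # where the knowledge base is mapped in memory

def to_file_offset(absolute_addr):
    return absolute_addr - SEGMENT_BASE

def to_absolute_addr(file_offset):
    return SEGMENT_BASE + file_offset

obj_ref = 0x5000_1A30             # a pointer stored inside an object
offset = to_file_offset(obj_ref)  # what gets written to the file
print(hex(offset), hex(to_absolute_addr(offset)))   # 0x1a30 0x50001a30
```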

Patent
23 Apr 1991
TL;DR: In this paper, backup is performed for each file in cylinder-address order on an auxiliary storage device to which random access is possible, minimizing the seek frequency between cylinders and shortening the processing time.
Abstract: PURPOSE: To minimize the seek frequency during data access by performing the backup of each file in cylinder-address order on an auxiliary storage device to which random access is possible. CONSTITUTION: A file designating means 1 is provided together with a file control table reading means 2, a file control table reference means 5, a data input means 6, a data output means 8, and a data control means 7. An address sorting means 3 sorts the addresses stored in a file control table so that the data accesses of files are carried out in the order of the addresses stored in the randomly accessible auxiliary storage device 10. Furthermore, a control information production means 4 produces information matching the logical arrangement of the data saved by the means 8 with the actual file data. It is thus possible to minimize the seek frequency between cylinders and to shorten the processing time.
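The gain from sorting by cylinder address is easy to quantify with a toy model in which seek cost is proportional to the distance the arm travels. The addresses below are invented for illustration.

```python
# Cylinder-sorted access order minimizes total arm travel.
def seek_distance(order):
    """Total cylinder-to-cylinder movement for a given access order."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

file_blocks = [412, 3, 250, 17, 399, 121]    # cylinder addresses from the
                                             # file control table
sorted_blocks = sorted(file_blocks)

print("unsorted seeks:", seek_distance(file_blocks))     # 1549
print("sorted seeks:  ", seek_distance(sorted_blocks))   #  409
```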

Patent
19 Feb 1991
TL;DR: In this paper, hierarchical information between blocks is extracted automatically by an access control part, and the extracted hierarchical information is managed by a meta-data control part in a database managing system.
Abstract: PURPOSE: To constitute the system so that hierarchical information between blocks can be extracted automatically by an access control part, and the extracted hierarchical information can be managed. CONSTITUTION: A database managing system 1 reads the structure definition of user data from a meta-database 15 when an application program is started, and holds it in a meta-data table 9 in a meta-data control part 8. When data storage begins, a management data operating part 4 reads a management data file 13 through a management data access part 10 and determines the storage position of the target user data. A data operating part 6 writes the user data to a database 14 through a database access part 11 while referring to the information in the meta-data table 9. When modified-information extraction data exists in the meta-data, a hierarchical information extracting part 7 issues a request to store hierarchical information to the hierarchical information store part 5, which writes the hierarchical information to the management data file 13.

Patent
18 Jan 1991
TL;DR: In this paper, a method for compressing user data for storage on tape is described, comprising the steps of: receiving a stream of user data words organized into a plurality of records (CRn); compressing the user data according to a compression algorithm involving converting at least some of the user data to codewords using a dictionary which is derived from the data; characterised by carrying out a plurality of flush operations (F) between starting consecutive dictionaries.
Abstract: A method for compressing user data for storing on tape (10) comprising the steps of: receiving a stream of user data words organised into a plurality of records (CRn); compressing the user data according to a compression algorithm involving converting at least some of the user data to codewords using a dictionary which is derived from the data; characterised by carrying out a plurality of flush operations (F) between starting consecutive dictionaries.
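A dictionary-based compressor with flushes between records can be sketched in miniature with an LZW-style loop: codewords are emitted from a dictionary grown out of the data itself, and a reserved flush codeword marks the point where the next dictionary starts. The codeword encoding and flush policy here are illustrative assumptions, not the patent's claims.

```python
# LZW-style compression with flush operations between dictionaries.
FLUSH = -1   # sentinel codeword recorded on tape at each dictionary reset

def compress_record(data, dictionary, next_code):
    out, phrase = [], ""
    for ch in data:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            out.append(dictionary[phrase])
            dictionary[phrase + ch] = next_code   # dictionary derived from data
            next_code += 1
            phrase = ch
    if phrase:
        out.append(dictionary[phrase])
    return out, next_code

def compress_records(records):
    out = []
    for record in records:
        dictionary = {chr(i): i for i in range(256)}   # fresh dictionary
        codes, _ = compress_record(record, dictionary, 256)
        out.extend(codes)
        out.append(FLUSH)       # flush between consecutive dictionaries
    return out

print(compress_records(["abababab", "cdcdcd"]))
# [97, 98, 256, 258, 98, -1, 99, 100, 256, 256, -1]
```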

01 Aug 1991
TL;DR: To solve the problem of inter-database information retrieval in a heterogeneous environment, broader definitions for join, union, intersection and selection operators are proposed and a probabilistic method to specify the selectivity of these operators is discussed.
Abstract: During the past decade, organizations have increased their scope and operations beyond their traditional geographic boundaries. At the same time, they have adopted heterogeneous and incompatible information systems independently of each other, without careful consideration that one day they might need to be integrated. As a result of this diversity, many important business applications today require access to data stored in multiple autonomous databases. This paper examines a problem of inter-database information retrieval in a heterogeneous environment, where conventional techniques are no longer efficient. To solve the problem, broader definitions for the join, union, intersection and selection operators are proposed. Also, a probabilistic method to specify the selectivity of these operators is discussed. An algorithm to compute these probabilities is provided in pseudocode.

Proceedings ArticleDOI
02 Dec 1991
TL;DR: The author introduces queueing network models for the optimization of the concurrent evaluation of complex database queries, which occur, for example, in deductive query processing.
Abstract: The author introduces queueing network models for the optimization of the concurrent evaluation of complex database queries, which occur, for example, in deductive query processing. The basic principle of the optimization is the reduction of response time through better load distribution. If data access leads to a bottleneck on any CPU, response time may possibly be reduced by a preceding dynamic replication of one or more tables, whose accesses make up the bottleneck, to a disc managed by a more lightly loaded CPU. For the comparison of alternative concurrent execution plans, only queueing network models give quantitative results that consider the contention on resources such as CPU or disc.
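The load-distribution argument can be checked on the back of an envelope with M/M/1 response-time formulas, which is roughly the kind of quantitative result a queueing network model provides. The rates below are invented for illustration, and replication is assumed to split the hot table's accesses evenly.

```python
# M/M/1 estimate of the response-time gain from replicating a hot table.
def mm1_response_time(arrival_rate, service_rate):
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

service_rate = 100.0                 # accesses/sec each CPU can serve
hot, light = 90.0, 30.0              # per-CPU arrival rates before replication

before = mm1_response_time(hot, service_rate)
# After replicating the bottleneck table, half its accesses move to the
# lighter-loaded CPU.
after = max(mm1_response_time(hot / 2, service_rate),
            mm1_response_time(light + hot / 2, service_rate))
print(f"bottleneck response time: {before:.3f}s -> {after:.3f}s")  # 0.100s -> 0.040s
```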

Journal ArticleDOI
TL;DR: The elements of distributed data base technology to help IS managers control the integration of distributed data base management systems in their organizations are described.
Abstract: To keep pace with rapid changes in corporate business requirements, organizations have begun to integrate distributed data base technology into their information architectures. Distributed data base management systems (DDBMSs) in particular promote the departmental sharing of data as well as easy access to data in a distributed environment, both key aspects of any data management strategy. This article describes the elements of distributed data base technology to help IS managers control the integration of distributed data bases in their organizations.