
Showing papers on "Data access" published in 1996


Patent
02 Apr 1996
TL;DR: In this article, a machine-readable printed symbol can be a bar code or in the form obtainable with any other printed encoding technology which encodes digital information in printed form so that it can be electronically read.
Abstract: Access to data resources on data communications networks is simplified by encoding data resource identifiers into a machine-readable printed symbol which can be scanned into a computer-based data communications terminal. The machine-readable printed symbol can be a bar code or in the form obtainable with any other printed encoding technology which encodes digital information in printed form so that it can be electronically read. Once the symbolic representation of the data resource specifier is read into the computer, software running on the computer can use a data resource identifier to access internet resources. Various features are directed to compressing the size of the data resource identifier to fit within a short symbol such as a bar code on a business card.
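The compression feature described above can be sketched with a simple prefix-substitution scheme: common URL prefixes are replaced by single-byte tokens so the identifier fits in a short printed symbol. The token table below is hypothetical, not the patent's actual encoding.

```python
# Hypothetical token table: common URI prefixes map to single bytes.
PREFIX_TOKENS = {
    "http://www.": b"\x01",
    "https://www.": b"\x02",
    "ftp://": b"\x03",
}

def compress_uri(uri: str) -> bytes:
    """Replace a known prefix with a one-byte token before symbol encoding."""
    for prefix, token in PREFIX_TOKENS.items():
        if uri.startswith(prefix):
            return token + uri[len(prefix):].encode("ascii")
    return b"\x00" + uri.encode("ascii")  # \x00 escape: no known prefix

def decompress_uri(data: bytes) -> str:
    """Invert the substitution after the symbol has been scanned back in."""
    token, rest = data[:1], data[1:].decode("ascii")
    for prefix, t in PREFIX_TOKENS.items():
        if t == token:
            return prefix + rest
    return rest  # \x00 escape path

uri = "http://www.example.com/page"
short = compress_uri(uri)
assert decompress_uri(short) == uri
assert len(short) < len(uri)
```

A real barcode payload would also need a checksum and a richer dictionary, but the round trip above captures the core idea of shrinking the identifier before printing.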

282 citations


Proceedings ArticleDOI
01 Dec 1996
TL;DR: The authors derive a set of auxiliary views such that the warehouse view and the auxiliary views together are self-maintainable: they can be maintained without going to the data sources or replicating all base data.

Abstract: A data warehouse stores materialized views over data from one or more sources in order to provide fast access to the integrated data, regardless of the availability of the data sources. Warehouse views need to be maintained in response to changes to the base data in the sources. Except for very simple views, maintaining a warehouse view requires access to data that is not available in the view itself. Hence, to maintain the view, one either has to query the data sources or store auxiliary data in the warehouse. The authors show that by using key and referential integrity constraints, one often can maintain a select-project-join view without going to the data sources or replicating the base relations in their entirety in the warehouse. They derive a set of auxiliary views such that the warehouse view and the auxiliary views together are self-maintainable: they can be maintained without going to the data sources or replicating all base data. In addition, their technique can be applied to simplify traditional materialized view maintenance by exploiting key and referential integrity constraints.
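An illustrative sketch of the self-maintenance idea (not the paper's exact construction): the warehouse stores a join view plus a projected auxiliary view of one base relation, so an insertion into the other relation can be folded into the view locally. Referential integrity guarantees the join key is present in the auxiliary view, so no source query is needed. All relation and attribute names here are invented.

```python
# Auxiliary view: only the key and the attributes the join view needs.
customers_aux = {
    "c1": {"name": "Acme"},
    "c2": {"name": "Globex"},
}
order_view = []  # materialized join view: (order_id, customer_name)

def on_order_insert(order_id, cust_key):
    # Referential integrity guarantees cust_key exists in the auxiliary
    # view, so the delta is applied without a round trip to the source.
    order_view.append((order_id, customers_aux[cust_key]["name"]))

on_order_insert("o1", "c1")
on_order_insert("o2", "c2")
assert order_view == [("o1", "Acme"), ("o2", "Globex")]
```

The paper's contribution is deriving the *minimal* such auxiliary views from the constraints; this sketch only shows why having them makes the warehouse self-maintainable.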

263 citations


Proceedings ArticleDOI
01 Jun 1996
TL;DR: The goal of the Garlic project is to build a multimedia information system capable of integrating data that resides in different database systems as well as in a variety of non-database data servers, while maintaining the independence of the data servers.
Abstract: The goal of the Garlic [1] project is to build a multimedia information system capable of integrating data that resides in different database systems as well as in a variety of non-database data servers. This integration must be enabled while maintaining the independence of the data servers, and without creating copies of their data. "Multimedia" should be interpreted broadly to mean not only images, video, and audio, but also text and application specific data types (e.g., CAD drawings, medical objects, …). Since much of this data is naturally modeled by objects, Garlic provides an object-oriented schema to applications, interprets object queries, creates execution plans for sending pieces of queries to the appropriate data servers, and assembles query results for delivery back to the applications. A significant focus of the project is support for "intelligent" data servers, i.e., servers that provide media-specific indexing and query capabilities [2]. Database optimization technology is being extended to deal with heterogeneous collections of data servers so that efficient data access plans can be employed for multi-repository queries.A prototype of the Garlic system has been operational since January 1995. Queries are expressed in an SQL-like query language that has been extended to include object-oriented features such as reference-valued attributes and nested sets. In addition to a C++ API, Garlic supports a novel query/browser interface called PESTO [3]. This component of Garlic provides end users of the system with a friendly, graphical interface that supports interactive browsing, navigation, and querying of the contents of Garlic databases. 
Unlike existing interfaces to databases, PESTO allows users to move back and forth seamlessly between querying and browsing activities, using queries to identify interesting subsets of the database, browsing the subset, querying the content of a set-valued attribute of a particularly interesting object in the subset, and so on.

158 citations


Patent
19 Jul 1996
TL;DR: In this paper, groupers are introduced to increase the efficiency of data retrieval for a common set of queries and further reduce potential abuse of information, while allowing query terminals to retrieve (part of) the stored data or learn properties of the stored data.
Abstract: An information storage system includes one or more information update terminals, a mapper, one or more partial-databases, and one or more query terminals, exchanging messages over a set of communication channels. An identifier-mapping mechanism provides (to an update terminal) a method for delegating control over retrieval of the data stored at the partial-databases to one or more mappers, typically operated by one or more trusted third parties. Update terminals supply information, that is stored in fragmented form by the partial-databases. Data-fragment identifiers and pseudonyms are introduced, preventing unauthorized de-fragmentation of information--thus providing compliance to privacy legislation--while at the same time allowing query terminals to retrieve (part of) the stored data or learn properties of the stored data. The mapper is necessarily involved in both operations, allowing data access policies to be enforced and potential abuse of stored information to be reduced. Introduction of multiple mappers acts to distribute information retrieval control among multiple trusted third parties. Introducing so-called `groupers` increases the efficiency of data retrieval for a common set of queries and further reduces potential abuse of information.

149 citations


Patent
Kazutoshi Shimada1
27 Dec 1996
TL;DR: In this article, an attribute-data extraction unit extracts location data and a password from attribute data which is added in advance to subject data to be accessed; when access to the data is requested, the extracted password is compared with a password input from an input unit, and the extracted location data is compared with current location data detected by a location-data detection unit.

Abstract: Upon data access, an attribute-data extraction unit extracts location data and a password from attribute data which is added in advance to the subject data to be accessed. When access to the data is requested, the extracted password is compared with a password input from an input unit, and the extracted location data is compared with current location data detected by a location-data detection unit. An access permission unit permits access to the data in accordance with the comparison results obtained by the password comparison unit and the location-data comparison unit. This process makes it possible to protect confidential information in a data processing apparatus more strictly.
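A minimal sketch of the two-factor check described above: access is granted only when both the supplied password and the current location match the attributes attached to the data. All names and values are illustrative.

```python
def permit_access(attrs, entered_password, current_location):
    """Grant access only if both comparisons succeed (password AND location)."""
    password_ok = attrs["password"] == entered_password
    location_ok = attrs["location"] == current_location
    return password_ok and location_ok

attrs = {"password": "s3cret", "location": "HQ-floor-3"}
assert permit_access(attrs, "s3cret", "HQ-floor-3")
assert not permit_access(attrs, "s3cret", "off-site")   # wrong place
assert not permit_access(attrs, "guess", "HQ-floor-3")  # wrong password
```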

133 citations


Patent
09 Dec 1996
TL;DR: In this article, a method and system of monitoring throughput of a data access system includes logging each transfer of data from a content server to a remote site, with each log entry including information indicative of transfer size, date, time, source and destination.

Abstract: A method and system of monitoring throughput of a data access system includes logging each transfer of data from a content server to a remote site, with each log entry including information indicative of transfer size, date, time, source and destination. The method includes accessing the log information in a passive and non-intrusive manner to evaluate the performance of transfers to a selected subset of the remote sites. In another embodiment, the performance evaluation is implemented for system resource allocation planning. In the preferred embodiment, the data access system is a broadband data system and the content servers utilize Internet applications. Also in the preferred embodiment, the data throughput is measured by the transfer rate of useful data, rather than all data including retransmissions.
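The passive evaluation step can be sketched as a pure log computation: each transfer is logged with size and timing, and the aggregate rate for a selected subset of remote sites is derived from the log alone, without touching the live system. Field names and figures are invented.

```python
# Hypothetical transfer log: one entry per completed transfer.
log = [
    {"dest": "siteA", "bytes": 1_000_000, "seconds": 4.0},
    {"dest": "siteB", "bytes": 2_000_000, "seconds": 5.0},
    {"dest": "siteA", "bytes": 3_000_000, "seconds": 6.0},
]

def mean_rate(entries, dests):
    """Aggregate transfer rate (bytes/s) for the selected destinations."""
    picked = [e for e in entries if e["dest"] in dests]
    return sum(e["bytes"] for e in picked) / sum(e["seconds"] for e in picked)

assert mean_rate(log, {"siteA"}) == 4_000_000 / 10.0  # 400,000 bytes/s
```

Measuring "useful data" as in the preferred embodiment would simply mean logging payload bytes rather than total bytes including retransmissions.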

110 citations


03 Oct 1996
TL;DR: The goal of the Olden project is to build a system that provides parallelism for general-purpose C programs with minimal programmer annotations, and a prototype of Olden is implemented on the Thinking Machines CM-5.
Abstract: The goal of the Olden project is to build a system that provides parallelism for general-purpose C programs with minimal programmer annotations. We focus on programs using dynamic structures such as trees, lists, and DAGs. We describe a programming and execution model for supporting programs that use pointer-based dynamic data structures. The major differences between our model and the standard sequential model are that the programmer explicitly chooses a particular strategy to map the dynamic data structures over a distributed heap, and annotates work that can be done in parallel using futures. Remote data access is handled automatically using a combination of software caching and computation migration. We provide a compile-time heuristic that selects between them for each pointer dereference based on programmer hints regarding the data layout. Also, we provide a new local-knowledge coherence mechanism for the cache that outperforms traditional invalidation methods on our benchmarks. The Olden profiler allows the programmer to verify the data layout hints and to determine which operations in the program are expensive. We have implemented a prototype of Olden on the Thinking Machines CM-5. We describe our implementation and report on experiments with eleven benchmarks.

102 citations


Patent
13 Jun 1996
TL;DR: In this article, a system and methods for optimizing the access of information, particularly in response to ad hoc queries or filters, are provided for optimizing data access in ad hoc networks.
Abstract: System and methods are provided for optimizing the access of information, particularly in response to ad hoc queries or filters. The system of the present invention includes a computer having a memory and a processor, a database for storing information in the memory as field values in a record, an indexing component for referencing a plurality of records by key values of the field(s), an input device for selecting desired records by entering a filter (query) condition corresponding to values stored in the field(s), and an optimization module for providing rapid access to the desired records. The optimization module employs one or more existing indices for optimizing data access, including using ones which do not directly support the filter expression. In instances where no indices are available, the optimization module may employ a "learned" optimization method of the invention for on-the-fly learning of records which meet the filter condition.

98 citations


Journal Article
TL;DR: This work proposes several randomized and Huffman-encoding based indexing schemes that are sensitive to data popularity patterns to structure data transmission on the wireless medium, so that the average energy consumption of mobile units and data access time are minimized while trying to access desired data.
Abstract: We consider the application of high volume information dissemination in broadcast based mobile environments. Since current mobile units accessing broadcast information have limited battery capacity, the problem of quick and energy-efficient access to data becomes particularly relevant as the number and sizes of information units increase. We propose several randomized and Huffman-encoding based indexing schemes that are sensitive to data popularity patterns to structure data transmission on the wireless medium, so that the average energy consumption of mobile units and data access time are minimized while trying to access desired data. We then propose an algorithm for PCS units to tune into desired data independent of the actual transmission scheme being used. We also empirically study the proposed schemes and propose different transmission modes for the base station to dynamically adapt to changes in the number of data files to be broadcast, the available bandwidth and the accuracy of data popularity patterns.
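The popularity-sensitive intuition can be sketched in a few lines: a Huffman tree built over access probabilities gives popular items shorter index paths, so the expected number of probes drops below that of a balanced, fixed-depth index. This is a generic Huffman construction, not the paper's specific broadcast scheme.

```python
import heapq
from itertools import count

def huffman_depths(weights):
    """Depth of each leaf in a Huffman tree built over the given weights."""
    depths = [0] * len(weights)
    tie = count()  # tie-breaker so equal weights never compare the lists
    heap = [(w, next(tie), [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, g1 = heapq.heappop(heap)
        w2, _, g2 = heapq.heappop(heap)
        for leaf in g1 + g2:      # every leaf in the merged subtree sinks one level
            depths[leaf] += 1
        heapq.heappush(heap, (w1 + w2, next(tie), g1 + g2))
    return depths

# Skewed popularity: item 0 is requested half the time.
popularity = [0.5, 0.25, 0.125, 0.125]
depths = huffman_depths(popularity)
expected = sum(p * d for p, d in zip(popularity, depths))
balanced = 2  # a fixed-depth index over 4 items needs 2 levels everywhere
assert expected < balanced
```

With uniform popularity the Huffman tree degenerates to the balanced index, which is why the schemes in the paper adapt to the accuracy of the popularity estimates.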

96 citations


Patent
17 Apr 1996
TL;DR: In this paper, the authors present a system for accessing and analyzing data through a central processing unit, which includes a non-modal user interface to provide a user access to the system.
Abstract: A system for accessing and analyzing data through a central processing unit. The system includes a non-modal user interface to provide a user access to the system. A number of application graphics objects allow the user to visually interact with a plurality of analysis objects through the non-modal user interface. The plurality of application analysis objects allow a user to interactively create an analysis network for analyzing one or more databases. A plurality of application data access objects automatically interprets the analysis network and allows the system to access required databases and to generate the structured query language required to access and analyze the databases as defined within the analysis network.

80 citations


Patent
06 Jun 1996
TL;DR: The termination availability database (TADB) of the instant invention performs routing decisions in response to call requests received from data access points (DAPs) as mentioned in this paper, which takes each of the requests, which relate to a special service call of a subscriber, and determines the particular termination of the subscriber to which the call is to be routed.
Abstract: The termination availability database (TADB) of the instant invention performs routing decisions in response to call requests received from data access points (DAPs). The TADB takes each of the requests, which relate to a special service call of a subscriber, and determines the particular termination of the subscriber to which the call is to be routed. To perform its determination of where to route the calls, the TADB takes into consideration data collected from the network and the availability of the different terminations of the subscriber. In addition, allocation algorithms are used.

Patent
03 Dec 1996
TL;DR: In this paper, the authors describe a computer having capabilities for hierarchical storage of data, including an interpreter that maps logical user read and write requests to physical block-level read and write requests, and a hierarchical performance driver having a disk driver interface for receiving the block-level read and write requests from the interpreter.
Abstract: A computer having capabilities for hierarchical storage of data, said computer including an interpreter that maps logical user read and write requests to physical block level read and write requests, and a hierarchical performance driver having a disk driver interface for receiving the block level read and write requests from the interpreter, the hierarchical performance driver issuing instructions to read and write data from plural data storage devices in response to block level read and write requests, plural data storage devices having different data access speeds, the hierarchical performance driver monitoring the rates of access of blocks of data stored on the data storage devices and transferring blocks of data accessed infrequently from a faster data storage device to a slower data storage device.
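The driver's migration policy can be sketched as follows: access counts are kept per block, and blocks whose access rate falls below a threshold are demoted from the fast device to the slow one. The class, thresholds, and two-tier model are illustrative simplifications of the patent's description.

```python
class HierarchicalDriver:
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.fast = {}   # block id -> data on the fast device
        self.slow = {}   # block id -> data on the slow device
        self.hits = {}   # block id -> access count in the current window

    def write(self, block, data):
        self.fast[block] = data          # new blocks land on the fast tier
        self.hits.setdefault(block, 0)

    def read(self, block):
        self.hits[block] = self.hits.get(block, 0) + 1
        return self.fast.get(block, self.slow.get(block))

    def rebalance(self):
        """Demote infrequently accessed blocks; reset the window counters."""
        for block in list(self.fast):
            if self.hits.get(block, 0) < self.threshold:
                self.slow[block] = self.fast.pop(block)
        self.hits = {b: 0 for b in self.hits}

d = HierarchicalDriver(threshold=2)
d.write("hot", b"a"); d.write("cold", b"b")
d.read("hot"); d.read("hot"); d.read("cold")
d.rebalance()
assert "hot" in d.fast and "cold" in d.slow
assert d.read("cold") == b"b"  # still readable, just from the slower tier
```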

Proceedings ArticleDOI
José A. Blakeley1
01 Jun 1996
TL;DR: An overview of OLE DB, a set of interfaces being developed at Microsoft whose goal is to enable applications to have uniform access to data stored in DBMS and non-DBMS information containers, and its areas of componentization.
Abstract: This paper presents an overview of OLE DB, a set of interfaces being developed at Microsoft whose goal is to enable applications to have uniform access to data stored in DBMS and non-DBMS information containers. Applications will be able to take advantage of the benefits of database technology without having to transfer data from its place of origin to a DBMS. Our approach consists of defining an open, extensible Collection of interfaces that factor and encapsulate orthogonal, reusable portions of DBMS functionality. These interfaces define the boundaries of DBMS components such as record containers, query processors, and transaction coordinators that enable uniform, transactional access to data among such components. The proposed interfaces extend Microsoft's OLE/COM object services framework with database functionality, hence these interfaces are collectively referred to as OLE DB. The OLE DB functional areas include data access and updates (rowsets), query processing, schema information, notifications, transactions, security, and access to remote data. In a sense, OLE DB represents an effort to bring database technology to the masses. This paper presents an overview of the OLE DB approach and its areas of componentization.

01 Jan 1996
TL;DR: In this article, the authors present tools that support coordinated access to data stored in distributed, heterogeneous, autonomous data repositories.
Abstract: Modern organizations need tools that support coordinated access to data stored in distributed, heterogeneous, autonomous data repositories.Database systems have proven highly successful in managing ...

Patent
19 Jul 1996
TL;DR: In this article, the use of access authority information units to gain access to the positive identification system solves the problems of open, unsecured and unauditable access to data for use in point of use identification systems.
Abstract: The present invention is a system and method of providing system integrity and positive audit capabilities to a positive identification system. The use of access authority information units to gain access to the positive identification system solves the problems of open, unsecured and unauditable access to data for use in point of use identification systems. In order to secure the rights to the data that is needed to make mass identification systems operate, it must be shown that records will be closed and secure, as well as that there will be an audit trail of access that is made to the data. This system solves those problems through the use of a system and method for identification with biometric data and/or personal identification numbers and/or personalized devices embedded with codes unique to their assigned users.

Patent
20 Mar 1996
TL;DR: In this article, a method and associated data structures for generating, maintaining, and applying "display parameters" as a portion of lookup information associated with a column in a database table are presented.
Abstract: A method and associated data structures for generating, maintaining, and applying "display parameters" as a portion of lookup information associated with a column in a database table. Display parameters provide for the display and retrieval of alternate values in place of the data stored in the associated column. Use of such alternate values permits the application database designer to provide more meaningful presentation of data to a user while permitting the storage of corresponding data to be optimized for storage or performance criteria. The display parameters are a portion of the lookup information associated with a column of a database table and are stored in the data dictionary of the application database. Storing the display parameters in the data dictionary permits all access to the associated column data to automatically utilize the alternate values for any data display or retrieval without further design considerations unique to the particular database access. The display parameters may comprise a static table consisting of rows (or a list) of entries. Each includes at least one alternate value for the stored value. A plurality of columns may be provided in each row to provide other alternate values useful by other column data access operations. The display parameters may alternatively comprise a second table identification and an associated query command. The second table is joined with the first table (containing the lookup associated column) by a relation. The associated query command is applied to the join to extract a table of alternate values used in accessing the associated column.
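The static-table case can be sketched directly: the column stores compact codes, and the display parameters kept in the data dictionary map each stored value to the alternate value shown to the user. Table, column, and code names below are invented.

```python
# Hypothetical data dictionary carrying display parameters per column.
data_dictionary = {
    "orders.status": {
        "P": "Pending",
        "S": "Shipped",
        "C": "Cancelled",
    }
}

def display_value(table_column, stored):
    """Return the alternate value for display, falling back to the raw code."""
    return data_dictionary.get(table_column, {}).get(stored, stored)

row = {"id": 17, "status": "S"}  # storage optimized: one-character code
assert display_value("orders.status", row["status"]) == "Shipped"
assert display_value("orders.status", "X") == "X"  # unknown codes pass through
```

The join-based alternative in the patent would replace the inner dict with a query against a second table; the lookup interface seen by the application stays the same.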

Proceedings Article
29 May 1996
TL;DR: The critical functionality of the basic planner is identified, how the information gathering problem can be cast as a planning problem is described, and the approach to efficiently generating high-quality plans in this application domain is presented.
Abstract: Information gathering requires locating and integrating data from a set of distributed information sources. These sources may contain overlapping data and can come from different types of sources, including traditional databases, knowledge bases, programs, and Web pages. In this paper we focus on the problem of how to apply a general-purpose planner to produce plans for information gathering. We identify the critical functionality of the basic planner, describe how the information gathering problem can be cast as a planning problem, and present our approach to efficiently generating high-quality plans in this application domain. The resulting information gathering planner is used as the query processor in the SIMS information mediator, which is being applied to provide access to data for transportation logistics and trauma care. We present empirical results in the transportation domain to demonstrate that this planner can efficiently produce information gathering plans on a set of example queries that were provided with the databases.

Proceedings ArticleDOI
TL;DR: The Georgetown University Medical Center Department of Radiology is currently involved in integrating three diverse networks into a coherent whole, and a solution for providing adequate access for all users, assuring confidentiality for patient data, and managing network traffic is described.
Abstract: At the Georgetown University Medical Center Department of Radiology we are currently involved in integrating three diverse networks into a coherent whole. We are installing a new Radiology Information System (RIS) and a new Picture Archiving and Communication System (PACS) as well as upgrading our existing research network, which provides Internet access, to add office automation tools. To accomplish this, many issues have to be resolved. Users of the different systems have different requirements and must have different levels of access to data on the various systems. For example, researchers need access to Internet resources and e-mail while data from the clinical systems must be protected from the outside world. Physicians and some other users on the non-clinical network also require fast and convenient access to the RIS and PACS for clinical uses. Parts of the network should be shielded from the heavy image traffic created by the PACS. On the other hand, because clinical data is also used for research, a connection between the networks is necessary. Our solution for providing adequate access for all users, assuring confidentiality for patient data, and managing network traffic will be described.© (1996) COPYRIGHT SPIE--The International Society for Optical Engineering.

Patent
05 Jun 1996
TL;DR: In this paper, a method and apparatus are provided for adaptive localization of frequently accessed, randomly addressed data in a direct access storage device (DASD) to achieve improved system access performance.
Abstract: A method and apparatus are provided for adaptive localization of frequently accessed, randomly addressed data in a direct access storage device (DASD) to achieve improved system access performance. At selected sampling intervals, a DASD storage controller analyzes data access patterns based on frequency of access, identifies a remapping algorithm to remap the logical groups to physical groups, and moves the physical groups according to the identified remapping algorithm. The data reordering on the DASD provides frequently accessed data in close proximity so that seek time is minimized. The adaptive data localization method periodically performed by the storage controller is transparent to the host file system. The reordering of the data on the DASD is performed during periods of low system data transfer activity.
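The remapping step can be sketched as a sort: at each sampling interval, the controller ranks logical groups by access frequency and lays the hottest ones out in adjacent physical groups, shrinking the frequency-weighted seek distance. The counts and the distance-from-slot-0 cost model below are illustrative only.

```python
# Hypothetical per-interval access counts for logical groups.
access_counts = {"g3": 120, "g0": 5, "g7": 90, "g2": 1, "g5": 40}

def remap(counts):
    """Map logical groups to physical slots 0..n-1, hottest first."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    return {group: slot for slot, group in enumerate(ranked)}

layout = remap(access_counts)
assert layout["g3"] == 0  # hottest group lands in the nearest slot
assert layout["g2"] == 4  # coldest group is pushed outward

# Frequency-weighted average distance from slot 0 under the new layout:
total = sum(access_counts.values())
avg_seek = sum(access_counts[g] * layout[g] for g in layout) / total
assert avg_seek < len(layout) / 2  # beats the uniform-placement average
```

The patent additionally defers the physical moves to periods of low transfer activity and keeps the remapping transparent to the host file system; this sketch only shows the ranking itself.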

Journal ArticleDOI
TL;DR: The IBM LZ1 compression algorithm was designed not only for robust and highly efficient compression, but also for extremely high reliability; because compression removes redundancy in the source, the compressed data become extremely vulnerable to data corruption.

Abstract: Data compression allows more efficient use of storage media and communication bandwidth, and standard compression offerings for tape storage have been well established since the late 1980s. Compression technology lowers the cost of storage without changing applications or data access methods. The desire to extend these cost/performance benefits to higher-data-rate media and broader media forms, such as DASD storage subsystems, motivated the design and development of the IBM LZ1 compression algorithm and its implementing technology. The IBM LZ1 compression algorithm was designed not only for robust and highly efficient compression, but also for extremely high reliability. Because compression removes redundancy in the source, the compressed data become extremely vulnerable to data corruption. Key design objectives for the IBM LZ1 development team were efficient hardware execution, efficient use of silicon technology, and minimum system-integration overhead. Through new observations of pattern matching, match-length distribution, and the use of graph vertex coloring for evaluating data flows, the IBM LZ1 compression algorithm and the chip family achieved the above objectives.
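Two properties noted above can be demonstrated with the standard zlib codec (standing in for the proprietary IBM LZ1 algorithm): compression removes redundancy, and the resulting stream is fragile under corruption, which is why the reliability objective matters.

```python
import zlib

data = b"data access " * 200          # highly redundant source
packed = zlib.compress(data)
assert len(packed) < len(data)        # redundancy removed
assert zlib.decompress(packed) == data

# Flip one byte in the middle of the stream: decompression typically
# fails outright or trips the integrity check, instead of degrading
# gracefully the way raw, uncompressed data would.
corrupt = bytearray(packed)
corrupt[len(corrupt) // 2] ^= 0xFF
try:
    recovered = zlib.decompress(bytes(corrupt))
    damaged = recovered != data
except zlib.error:
    damaged = True
assert damaged
```

A single flipped byte in the raw 2,400-byte source would corrupt one character; in the compressed stream it can invalidate everything that follows, which motivates the hardware-level reliability design described in the paper.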

Proceedings Article
03 Sep 1996
TL;DR: This paper presents a dynamic reordering strategy that can be exploited to match execution order to the optimal data fetch order, in all parts of the plan-tree, and reports on a prototype implementation based on Postgres.
Abstract: In the relational model the order of fetching data does not affect query correctness. This flexibility is exploited in query optimization by statically reordering data accesses. However, once a query is optimized, it is executed in a fixed order in most systems, with the result that data requests are made in a fixed order. Only limited forms of runtime reordering can be provided by low-level device managers. More aggressive reordering strategies are essential in scenarios where the latency of access to data objects varies widely and dynamically, as in tertiary devices. This paper presents such a strategy. Our key innovation is to exploit dynamic reordering to match execution order to the optimal data fetch order, in all parts of the plan-tree. To demonstrate the practicality of our approach and the impact of our optimizations, we report on a prototype implementation based on Postgres. Using our system, typical I/O cost for queries on tertiary memory databases is as much as an order of magnitude smaller than with conventional query processing techniques.
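The payoff of reordering can be sketched with a toy cost model: on a device where fetch cost depends on position (say, tape seek distance), serving requests in an order matched to the data layout costs far less than a fixed plan order. The positions and the one-dimensional cost model are illustrative, not the paper's.

```python
def fetch_cost(order, start=0):
    """Total seek distance when positions are visited in the given order."""
    cost, here = 0, start
    for pos in order:
        cost += abs(pos - here)
        here = pos
    return cost

plan_order = [900, 10, 880, 30]  # order dictated by a fixed execution plan
swept = sorted(plan_order)       # one reordering: a single sweep across the medium
assert fetch_cost(swept) < fetch_cost(plan_order)
```

Here the fixed order costs 3510 units against 900 for the sweep; the paper's contribution is making such reordering dynamic and propagating it through all parts of the plan tree while preserving query correctness.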

Patent
Geoffrey Sharman1
23 Jan 1996
TL;DR: In this article, a distributed data processing system with replication of data across the system is presented, in which a currency period is associated with each copy of a data object and the data object is assumed to be valid during the currency period.
Abstract: Distributed data processing systems with replication of data across the system are required to provide data access and availability performance approaching that of a stand-alone system with its own data, but reductions in network traffic gained by replication must be balanced against the additional network traffic required to update multiple copies of the data. Apparatus and a method of operating a distributed data processing system is provided in which a currency period is associated with each copy of a data object and the data object is assumed to be valid during the currency period. The apparatus has means for checking (500) whether the currency period has expired and means for updating (570, 580) the copies. A validity indicator is set on determination of expiry of the currency period and when updates are applied to the primary copy, and is checked to determine validity of a copy.
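The currency-period mechanism can be sketched as an expiring local copy: each replica carries an expiry time and is assumed valid until it passes, avoiding a network round trip on every read. Class names and the refresh policy below are illustrative.

```python
import time

class Copy:
    def __init__(self, value, currency_seconds):
        self.value = value
        self.expires = time.monotonic() + currency_seconds

    def is_valid(self):
        """Within the currency period the copy is assumed valid."""
        return time.monotonic() < self.expires

    def read(self, refresh):
        """Serve locally while current; refresh from the primary on expiry."""
        if not self.is_valid():
            self.value = refresh()
            self.expires = time.monotonic() + 60  # start a new currency period
        return self.value

copy = Copy("v1", currency_seconds=0.0)  # already expired
assert not copy.is_valid()
assert copy.read(lambda: "v2") == "v2"   # refreshed from the primary
assert copy.is_valid()                   # valid for the new period
```

The patent additionally sets the validity indicator when updates hit the primary copy, so expiry is not the only trigger; this sketch covers only the time-based path.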

Patent
19 Nov 1996
TL;DR: In this article, the parent information of data is added to the data itself and to the storage location information of the data in the database, and lock control is performed by use of the parent-child relationship.

Abstract: In a database management system for concurrently executing a plurality of transactions for access to a database, there is provided a user interface for defining a parent/child relationship between a plurality of pieces of data. The parent information of data is added to the data itself and to the storage location information of the data in the database. Upon access to data, lock control is performed by use of the parent information of the data, thereby representing a locked object.

Patent
30 Dec 1996
TL;DR: In this article, the authors present a real-time device data management (RTNDD) system for managing access to data describing devices in a telecommunications network. But their system is limited to the use of a single device at a time.
Abstract: A real-time device data management (RTNDD) system for managing access to data describing devices in a telecommunications network. The RTNDD system maintains a partition data structure for each device. The partition data structure has a header section and a port data section. The header section contains data elements describing the device (e.g., number of ports and device type). The port data section has a port data structure for each port of the device that contains data elements describing the port (e.g., current cross-connect and actual port address for device). The RTNDD system also provides a standard interface through which external systems access the device data. The standard interface has functions for reading and writing to the partition data structures. Each read function reads multiple data elements of a header section or a port data structure at a time, and each write function writes a single data element of a device at a time. The standard interface is device independent such that an external system can use the standard interface to access device data for any type of device whose data is stored in a partition data structure.

Journal ArticleDOI
TL;DR: This paper defines Structured Maps and presents several examples adapted from the Sequent Corporate Electronic Library (SCEL), an intranet resource currently implemented in HTML.
Abstract: The overwhelming accessibility to data, on a global scale, does not necessarily translate to widespread utility of data. We often find that we are drowning in data, with few tools to help manage relevant data for our various activities. This paper presents Structured Maps, an additional modeling construct at a level above available information sources, to provide structured and managed access to data. Structured Maps are based on Topic Navigation Maps, defined by the SGML community to provide multi-document indices and glossaries. A Structured Map is a modeling construct that provides a layer of typed entities and relationships where the entities can have typed references to information elements in the Information Universe. The type structure introduces semantics so that we know what sort of entities are being tracked and why various references have been made. Structured Maps can be placed over loosely structured data, e.g., document collections, with references at various levels of granularity. Structured Maps directly support new, customized, and even personalized use of the information. In this paper, we define Structured Maps and present several examples adapted from the Sequent Corporate Electronic Library (SCEL), an intranet resource currently implemented in HTML.

Journal ArticleDOI
TL;DR: An effective database system for diabetes care will include: user-friendliness, rapid but secure access to data, a facility for multiple selections for analysis and audit, the ability to be used as part of the patient consultation process, the ability to interface or integrate with other applications, and cost efficiency.
Abstract: The St Vincent Declaration includes a commitment to continuous quality improvement in diabetes care. This necessitates the collection of appropriate information to ensure that diabetes services are efficient, effective and equitable. The quantity of information, and the need for rapid access, means that this must be computer-based. The choice of architecture and the design of a database for diabetes care must take into account available equipment and operational requirements. Hardware topology may be determined by the operating system and/or netware software. An effective database system will include: user-friendliness, rapid but secure access to data, a facility for multiple selections for analysis and audit, the ability to be used as part of the patient consultation process, the ability to interface or integrate with other applications, and cost efficiency. An example of a clinical information database for diabetes care, Diamond, is described.

Proceedings Article
02 Aug 1996
TL;DR: This paper discusses how database query processing and distributed object management techniques can be used to facilitate geoscientific data mining and analysis.
Abstract: Geoscience studies produce data from various observations, experiments, and simulations at an enormous rate. Exploratory data mining extracts "content information" from massive geoscientific datasets to derive knowledge and provide a compact summary of the dataset. In this paper, we discuss how database query processing and distributed object management techniques can be used to facilitate geoscientific data mining and analysis. Some special requirements of large-scale geoscientific data mining that are addressed include geoscientific data modeling, parallel query processing, and heterogeneous distributed data access.

Patent
Masaki Suzuki1
15 Mar 1996
TL;DR: In this paper, the authors present a multimedia server capable of reducing a number of accesses to memory devices with low access speed, where a desired multimedia data requested by one terminal connected with one input/output unit is placed in a buffer of another input/Output unit.
Abstract: A multimedia server capable of reducing the number of accesses to memory devices with low access speed. The server judges whether desired multimedia data requested by a terminal connected to one input/output unit is present in a buffer of another input/output unit, and transfers the desired multimedia data to a buffer of the requesting input/output unit: from the multimedia data storage device when the data is judged to be absent, or from the buffer of the other input/output unit when it is judged to be present. It is also possible to predict future necessary multimedia data at each input/output unit, monitor the multimedia data transferred through the network, and pre-fetch the future necessary multimedia data from the monitored data. It is also possible to combine a plurality of requests for identical multimedia data, issued by different terminals connected to different input/output units within a prescribed period of time, into a single unified request, so that the identical multimedia data can be obtained by a single access and transferred to the different input/output units simultaneously.
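The core idea of the first mechanism, checking peer buffers before falling back to slow storage, can be sketched as follows. This is a simplified model with hypothetical names, not the patent's implementation:

```python
class IOUnit:
    """An input/output unit with its own buffer of multimedia data."""
    def __init__(self, name):
        self.name = name
        self.buffer = {}   # data_id -> content

class MultimediaServer:
    def __init__(self, storage):
        self.storage = storage      # slow storage device: data_id -> content
        self.units = {}
        self.storage_reads = 0      # accesses to the low-speed device

    def add_unit(self, name):
        self.units[name] = IOUnit(name)

    def request(self, unit_name, data_id):
        unit = self.units[unit_name]
        if data_id in unit.buffer:              # already buffered locally
            return unit.buffer[data_id]
        # Check the buffers of other I/O units before touching slow storage.
        for peer in self.units.values():
            if peer is not unit and data_id in peer.buffer:
                unit.buffer[data_id] = peer.buffer[data_id]
                return unit.buffer[data_id]
        # Last resort: read from the low-speed storage device.
        self.storage_reads += 1
        unit.buffer[data_id] = self.storage[data_id]
        return unit.buffer[data_id]
```

The pre-fetching and request-unification mechanisms described in the abstract would extend the same scheme: pre-fetch fills `unit.buffer` ahead of requests, and coalescing turns several `request` calls for the same `data_id` into one storage access fanned out to multiple buffers.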

Proceedings ArticleDOI
17 Nov 1996
TL;DR: Measurements of three iterative applications show that a predictive protocol increases the number of shared-data requests satisfied locally, thus reducing the remote data access latency and total execution time.
Abstract: Many scientific applications are iterative and specify repetitive communication patterns. This paper shows how a parallel-language compiler and a predictive cache-coherence protocol in a distributed shared memory system together can implement shared-memory communication efficiently for applications with unpredictable but repetitive communication patterns. The compiler uses static analysis to identify program points where potentially repetitive communication occurs. At runtime, the protocol builds a communication schedule in one iteration and uses the schedule to pre-send data in subsequent iterations. This paper contains measurements of three iterative applications (including adaptive programs with unstructured data accesses) that show that a predictive protocol increases the number of shared-data requests satisfied locally, thus reducing the remote data access latency and total execution time.
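The record-then-replay idea behind the predictive protocol can be illustrated with a toy model: the schedule built in one iteration determines which pages are pre-sent in the next, so repeated accesses are satisfied locally. The class and method names are assumptions for illustration, not the paper's protocol interface:

```python
class PredictiveProtocol:
    """Builds a communication schedule in one iteration and uses it to
    pre-send data in subsequent iterations."""
    def __init__(self):
        self.prev_schedule = set()   # (node, page) pairs seen last iteration
        self.cur_schedule = set()    # pairs being recorded this iteration
        self.remote_fetches = 0

    def begin_iteration(self):
        # Last iteration's schedule drives pre-sending in this one.
        self.prev_schedule = self.cur_schedule
        self.cur_schedule = set()

    def access(self, node, page):
        self.cur_schedule.add((node, page))
        if (node, page) in self.prev_schedule:
            return "local"           # page was pre-sent: no remote latency
        self.remote_fetches += 1     # miss: fetch over the network
        return "remote"
```

Because the schedule is rebuilt every iteration, the model also handles adaptive programs whose communication pattern drifts: a changed access pays one remote fetch and then joins the schedule.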

Proceedings ArticleDOI
01 Jan 1996
TL;DR: A run-time system called RAPID is described that provides a set of library functions for specifying irregular data objects and tasks that access these objects, and the system extracts a task dependence graph from data access patterns, and executes tasks efficiently on a distributed memory machine.
Abstract: Run-time compilation techniques have been shown effective for automating the parallelization of loops with unstructured indirect data accessing patterns. However, it is still an open problem to efficiently parallelize sparse matrix factorizations commonly used in iterative numerical problems. The difficulty is that a factorization process contains irregularly-interleaved communication and computation with varying granularities and it is hard to obtain scalable performance on distributed memory machines. In this paper, we present an inspector/executor approach for parallelizing such applications by embodying automatic graph scheduling techniques to optimize interleaved communication and computation. We describe a run-time system called RAPID that provides a set of library functions for specifying irregular data objects and tasks that access these objects. The system extracts a task dependence graph from data access patterns, and executes tasks efficiently on a distributed memory machine. We discuss a set of optimization strategies used in this system and demonstrate the application of this system in parallelizing sparse Cholesky and LU factorizations.
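The inspector step, extracting a task dependence graph from data access patterns, can be sketched generically: a task depends on an earlier task when one of them writes a data object the other touches (the standard read-after-write, write-after-read, and write-after-write conditions). This is a generic illustration of the idea, not RAPID's actual library API:

```python
def build_dependence_graph(tasks):
    """tasks: list of (name, reads, writes) with reads/writes as sets of
    data-object ids. Returns edges (i, j) meaning task j must run after
    task i because they conflict on some shared data object."""
    edges = set()
    for i, (_, reads_i, writes_i) in enumerate(tasks):
        for j in range(i + 1, len(tasks)):
            _, reads_j, writes_j = tasks[j]
            raw_or_waw = writes_i & (reads_j | writes_j)  # RAW / WAW
            war = reads_i & writes_j                      # WAR
            if raw_or_waw or war:
                edges.add((i, j))
    return edges
```

A scheduler can then execute any tasks with no unfinished predecessors in parallel, which is how interleaved communication and computation in sparse factorizations get overlapped.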