scispace - formally typeset

Showing papers on "Data access published in 2004"


Patent
13 Nov 2004
TL;DR: In this article, the file may be stored in at least one remote storage device associated with the at least one remote computer, respectively, and versions of the file contained on the one or more remote storage devices are synchronized with that on the local device by transmitting, over the network connecting the one or more remote devices with the local device, at least one of Delta files and Inverse Delta files between the remote storage devices and the local storage device.
Abstract: Applications, systems and methods for permitting simultaneous use of a file by two or more computers over a network may include storing the file locally in a local storage device associated with a local computer; providing access to the file by at least one remote computer which is connectable to the local computer via the network, and wherein at least one of the computers is connectable to the network through a firewall element. The file may be stored in at least one remote storage device associated with the at least one remote computer, respectively, and versions of the file contained on the one or more remote storage devices are synchronized with that on the local device by transmitting over the network connecting the one or more remote storage devices with the local device, at least one of Delta files and Inverse Delta files between the remote storage devices and the local storage device. A method of remotely observing computer activity on a second computer remote with respect to a first computer is also provided. Further, file sharing systems and methods are provided for sharing files among computers.
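The delta-file mechanism described above can be sketched in a few lines: a forward delta rebuilds the new version from the old one, and an inverse delta (the delta computed in the opposite direction) rolls it back. This is only an illustration of the idea using Python's standard difflib; the patent does not specify a delta format, so the opcode encoding here is hypothetical.

```python
import difflib

def make_delta(old, new):
    """Forward delta: a list of ops that rebuild `new` from `old`.
    The ("copy", i, j) / ("insert", text) encoding is hypothetical."""
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))        # reuse a run of the old version
        elif j1 != j2:
            ops.append(("insert", new[j1:j2]))  # new material (replace/insert)
        # pure deletions contribute nothing to the output
    return ops

def apply_delta(old, delta):
    """Replay a delta against the old version to obtain the new one."""
    parts = []
    for op in delta:
        if op[0] == "copy":
            parts.append(old[op[1]:op[2]])
        else:
            parts.append(op[1])
    return "".join(parts)

# An inverse delta is simply make_delta(new, old): applying it to the
# new version restores the previous one.
```

Synchronization then amounts to shipping only the (usually small) delta over the network instead of the whole file.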

846 citations


Proceedings ArticleDOI
21 Jun 2004
TL;DR: The Kepler scientific workflow system provides domain scientists with an easy-to-use yet powerful system for capturing scientific workflows (SWFs), a formalization of the ad-hoc process that a scientist may go through to get from raw data to publishable results.
Abstract: Most scientists conduct analyses and run models in several different software and hardware environments, mentally coordinating the export and import of data from one environment to another. The Kepler scientific workflow system provides domain scientists with an easy-to-use yet powerful system for capturing scientific workflows (SWFs). SWFs are a formalization of the ad-hoc process that a scientist may go through to get from raw data to publishable results. Kepler attempts to streamline the workflow creation and execution process so that scientists can design, execute, monitor, re-run, and communicate analytical procedures repeatedly with minimal effort. Kepler is unique in that it seamlessly combines high-level workflow design with execution and runtime interaction, access to local and remote data, and local and remote service invocation. SWFs are superficially similar to business process workflows but have several challenges not present in the business workflow scenario. For example, they often operate on large, complex and heterogeneous data, can be computationally intensive and produce complex derived data products that may be archived for use in reparameterized runs or other workflows. Moreover, unlike business workflows, SWFs are often dataflow-oriented as witnessed by a number of recent academic systems (e.g., DiscoveryNet, Taverna and Triana) and commercial systems (Scitegic/Pipeline-Pilot, Inforsense). In a sense, SWFs are often closer to signal-processing and data streaming applications than they are to control-oriented business workflow applications.
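The dataflow orientation described above can be illustrated with a minimal pipeline in which each actor consumes the stream produced by its upstream neighbor. This is a toy sketch of the dataflow style, not Kepler's actual actor model (Kepler builds on a far richer framework); the actor names are invented.

```python
def run_workflow(source, actors):
    """Chain actors into a dataflow pipeline: each actor is a generator
    function that consumes the upstream stream and yields derived items."""
    stream = iter(source)
    for actor in actors:
        stream = actor(stream)
    return list(stream)

def drop_missing(stream):
    """Toy cleaning actor: filter out missing (None) readings."""
    for item in stream:
        if item is not None:
            yield item

def calibrate(stream):
    """Toy transformation actor: apply a made-up calibration factor."""
    for item in stream:
        yield item * 10
```

Because actors only see streams, the same pipeline can be re-run or reparameterized by swapping actors without touching the rest of the workflow.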

746 citations


Proceedings ArticleDOI
07 Mar 2004
TL;DR: A hybrid approach (HybridCache) is proposed, which can further improve the performance by taking advantage of CacheData and CachePath while avoiding their weaknesses, and can significantly reduce the query delay and message complexity when compared to other caching schemes.
Abstract: Most research in ad hoc networks focuses on routing, and not much work has been done on data access. A common technique used to improve the performance of data access is caching. Cooperative caching, which allows the sharing and coordination of cached data among multiple nodes, can further explore the potential of caching techniques. Due to mobility and resource constraints of ad hoc networks, cooperative caching techniques designed for wired networks may not be applicable to ad hoc networks. In this paper, we design and evaluate cooperative caching techniques to efficiently support data access in ad hoc networks. We first propose two schemes: CacheData, which caches the data, and CachePath, which caches the data path. After analyzing the performance of those two schemes, we propose a hybrid approach (HybridCache) which can further improve the performance by taking advantage of CacheData and CachePath while avoiding their weaknesses. Simulation results show that the proposed schemes can significantly reduce the query delay and message complexity when compared to other caching schemes.
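The CacheData/CachePath trade-off can be sketched as a simple decision rule: small items are cheap enough to cache directly, while for a large item it may only pay to cache the path to a nearby copy. The thresholds and the rule itself are illustrative assumptions, not the paper's actual HybridCache criteria.

```python
def cache_decision(item_size, size_threshold, hops_to_data, path_gain_hops=2):
    """Illustrative HybridCache-style rule (thresholds are assumptions):
    - small items: cache the data itself (CacheData);
    - large items far from the nearest copy: cache only the path to a
      closer cached copy (CachePath), which costs little space;
    - otherwise caching is not worthwhile."""
    if item_size <= size_threshold:
        return "CacheData"
    if hops_to_data > path_gain_hops:
        return "CachePath"
    return "NoCache"
```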

327 citations


Journal ArticleDOI
TL;DR: The authors present their research findings, based closely on their report to OECD, on key issues in data access, as well as operating principles and management aspects necessary to successful data access regimes.
Abstract: Access to and sharing of data are essential for the conduct and advancement of science. This article argues that publicly funded research data should be openly available to the maximum extent possible. To seize upon advancements of cyberinfrastructure and the explosion of data in a range of scientific disciplines, this access to and sharing of publicly funded data must be advanced within an international framework, beyond technological solutions. The authors, members of an OECD Follow-up Group, present their research findings, based closely on their report to OECD, on key issues in data access, as well as operating principles and management aspects necessary to successful data access regimes.

274 citations


01 Jan 2004
TL;DR: The Hourglass architecture is presented, and the design maintains streaming data flows in the face of disconnection, allows discovery of and access to data from sensors, supports participants of widely varying capabilities, takes advantage of well-provisioned, well-connected machines, and provides separate efficient communication paths for short-lived control messages and long-lived stream-oriented data.
Abstract: The emergence of computationally-enabled sensors and the applications that use sensor data introduces the need for a software infrastructure designed specifically to enable the rapid development and deployment of applications that draw upon data from multiple, heterogeneous sensor networks. We present the Hourglass infrastructure, which addresses this need. Hourglass is an Internet-based infrastructure for connecting a wide range of sensors, services, and applications in a robust fashion. In Hourglass, a stream of data elements is routed to one or more applications. These data elements are generated from sensors inside of sensor networks whose internals can be entirely hidden from participants in the Hourglass system. The Hourglass infrastructure consists of an overlay network of well-connected dedicated machines that provides service registration, discovery, and routing of data streams from sensors to client applications. In addition, Hourglass supports a set of in-network services such as filtering, aggregation, compression, and buffering stream data between source and destination. Hourglass also allows third party services to be deployed and used in the network. In this paper, we present the Hourglass architecture and describe our test-bed and implementation. We demonstrate how our design maintains streaming data flows in the face of disconnection, allows discovery of and access to data from sensors, supports participants of widely varying capabilities (servers to PDAs), takes advantage of well-provisioned, well-connected machines, and provides separate efficient communication paths for short-lived control messages and long-lived stream-oriented data.

187 citations


Journal ArticleDOI
19 Mar 2004-Science
TL;DR: In this paper, the authors summarize key findings of an international group that studied these issues on behalf of the OECD, and argue that an international framework of principles and guidelines for data access is needed to better realize this potential.
Abstract: The emergence of a global cyberinfrastructure is rapidly increasing the ability of scientists to produce, manage, and use data, leading to new understanding and modes of scientific inquiry that depend on broader data access. As research becomes increasingly global, data intensive, and multifaceted, it is imperative to address national and international data access and sharing issues systematically in a policy arena that transcends national jurisdictions. The authors of this Policy Forum summarize key findings of an international group that studied these issues on behalf of the OECD, and argue that an international framework of principles and guidelines for data access is needed to better realize this potential. They provide a framework for locating and analyzing where improvements can be made in data access regimes, and highlight several topics that require further examination to better inform future policies.

181 citations


Patent
29 Jul 2004
TL;DR: An industrial automation system comprises a security access device, an industrial automation device, a user interface, and a security interface as discussed by the authors, and the user interface is configured to provide a user with access to data stored inside the industrial automation device.
Abstract: An industrial automation system comprises a security access device, an industrial automation device, a user interface, and a security interface. The user interface is configured to provide a user with access to data stored inside the industrial automation device. The security interface is configured to receive information from the access device and, based on the information received from the access device, to provide authorization for the user to access the data stored inside the industrial automation device using the user interface.

156 citations


Proceedings ArticleDOI
17 May 2004
TL;DR: TeXQuery as discussed by the authors is a full-text search extension to XQuery that provides a rich set of fully composable fulltext search primitives, such as Boolean connectives, phrase matching, proximity distance, stemming and thesauri.
Abstract: One of the key benefits of XML is its ability to represent a mix of structured and unstructured (text) data. Although current XML query languages such as XPath and XQuery can express rich queries over structured data, they can only express very rudimentary queries over text data. We thus propose TeXQuery, which is a powerful full-text search extension to XQuery. TeXQuery provides a rich set of fully composable full-text search primitives, such as Boolean connectives, phrase matching, proximity distance, stemming and thesauri. TeXQuery also enables users to seamlessly query over both structured and text data by embedding TeXQuery primitives in XQuery, and vice versa. Finally, TeXQuery supports a flexible scoring construct that can be used to score query results based on full-text predicates. TeXQuery is the precursor of the full-text language extensions to XPath 2.0 and XQuery 1.0 currently being developed by the W3C.
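The kind of composable full-text primitives the paper describes (phrase matching, proximity distance) can be approximated in a few lines of Python. This sketch ignores stemming, thesauri, and scoring, and is not TeXQuery syntax; it only illustrates why composability of boolean-valued primitives matters.

```python
def tokens(text):
    return text.lower().split()

def phrase_match(text, phrase):
    """True if the exact word sequence `phrase` occurs in `text`."""
    t, p = tokens(text), tokens(phrase)
    return any(t[i:i + len(p)] == p for i in range(len(t) - len(p) + 1))

def within_distance(text, word_a, word_b, max_gap):
    """Proximity primitive: the two words occur within `max_gap`
    token positions of each other."""
    t = tokens(text)
    pos_a = [i for i, w in enumerate(t) if w == word_a]
    pos_b = [i for i, w in enumerate(t) if w == word_b]
    return any(abs(a - b) <= max_gap for a in pos_a for b in pos_b)

# Because both primitives return booleans, they compose with ordinary
# `and`/`or`/`not` -- the fully-composable property the language aims for.
```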

143 citations


Journal ArticleDOI
TL;DR: This work defines average/transient deadline miss ratio and new data freshness metrics to let a database administrator specify the desired quality of real-time data services for a specific application and presents a novel QoS management architecture for real- time databases to support the desired QoS even in the presence of unpredictable workloads and access patterns.
Abstract: The demand for real-time data services is increasing in many applications including e-commerce, agile manufacturing, and telecommunications network management. In these applications, it is desirable to execute transactions within their deadlines, i.e., before the real-world status changes, using fresh (temporally consistent) data. However, meeting these fundamental requirements is challenging due to dynamic workloads and data access patterns in these applications. Further, transaction timeliness and data freshness requirements may conflict. We define average/transient deadline miss ratio and new data freshness metrics to let a database administrator specify the desired quality of real-time data services for a specific application. We also present a novel QoS management architecture for real-time databases to support the desired QoS even in the presence of unpredictable workloads and access patterns. To prevent overload and support the desired QoS, the presented architecture applies feedback control, admission control, and flexible freshness management schemes. A simulation study shows that our QoS-aware approach can achieve a near zero miss ratio and perfect freshness, meeting basic requirements for real-time transaction processing. In contrast, baseline approaches fail to support the desired miss ratio and/or freshness in the presence of unpredictable workloads and data access patterns.
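The feedback-control idea can be sketched as a proportional controller that nudges the transaction admission rate toward the target deadline miss ratio. The gain and bounds are invented for illustration; the paper's architecture combines feedback control with admission control and flexible freshness management in a more elaborate way.

```python
def adjust_admission(rate, measured_miss, target_miss,
                     gain=0.5, lo=0.1, hi=1.0):
    """Proportional feedback sketch (gain and bounds are invented):
    admit fewer transactions when the measured deadline miss ratio
    exceeds the target, and more when there is slack."""
    error = target_miss - measured_miss
    return min(hi, max(lo, rate + gain * error))
```

Run once per sampling period, this keeps the system near the miss-ratio setpoint even as workloads fluctuate.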

142 citations


Journal ArticleDOI
TL;DR: An assessment of the integrated use of ground-based and satellite data for air quality monitoring, including several short case studies, was conducted and identified current U.S. satellites with potential forAir quality applications.
Abstract: In the last 5 yr, the capabilities of earth-observing satellites and the technological tools to share and use satellite data have advanced sufficiently to consider using satellite imagery in conjunction with ground-based data for urban-scale air quality monitoring. Satellite data can add synoptic and geospatial information to ground-based air quality data and modeling. An assessment of the integrated use of ground-based and satellite data for air quality monitoring, including several short case studies, was conducted. Findings identified current U.S. satellites with potential for air quality applications, with others available internationally and several more to be launched within the next 5 yr; several of these sensors are described in this paper as illustrations. However, use of these data for air quality applications has been hindered by historical lack of collaboration between air quality and satellite scientists, difficulty accessing and understanding new data, limited resources and agency priorities to develop new techniques, ill-defined needs, and poor understanding of the potential and limitations of the data. Specialization in organizations and funding sources has limited the resources for cross-disciplinary projects. To successfully use these new data sets requires increased collaboration between organizations, streamlined access to data, and resources for project implementation.

141 citations


Proceedings Article
06 Dec 2004
TL;DR: A new distributed file system called BlueFS is built, which reduces file system energy usage by up to 55% and provides up to 3 times faster access to data replicated on portable storage.
Abstract: A fundamental vision driving pervasive computing research is access to personal and shared data anywhere at any time. In many ways, this vision is close to being realized. Wireless networks such as 802.11 offer connectivity to small, mobile devices. Portable storage, such as mobile disks and USB keychains, lets users carry several gigabytes of data in their pockets. Yet, at least three substantial barriers to pervasive data access remain. First, power-hungry network and storage devices tax the limited battery capacity of mobile computers. Second, the danger of viewing stale data or making inconsistent updates grows as objects are replicated across more computers and portable storage devices. Third, mobile data access performance can suffer due to variable storage access times caused by dynamic power management, mobility, and use of heterogeneous storage devices. To overcome these barriers, we have built a new distributed file system called BlueFS. Compared to the Coda file system, BlueFS reduces file system energy usage by up to 55% and provides up to 3 times faster access to data replicated on portable storage.
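The energy-aware replica selection that motivates BlueFS can be illustrated with a toy cost model: given each replica's access latency and energy cost, read from the cheapest one. The tuple format and the equal weighting of time and energy are assumptions for illustration; BlueFS's real policy adapts to device power states.

```python
def choose_replica(replicas):
    """Pick the replica with the lowest combined cost. Each replica is
    (name, access_ms, energy_mj); format and equal weighting are
    illustrative assumptions, not the BlueFS policy."""
    return min(replicas, key=lambda r: r[1] + r[2])[0]
```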

Journal ArticleDOI
T. Götz, O. Suhre
TL;DR: The Common Analysis System is the subsystem in the Unstructured Information Management Architecture (UIMA) that handles data exchanges between the various UIMA components, such as analysis engines and unstructured information management applications.
Abstract: The Common Analysis System (CAS) is the subsystem in the Unstructured Information Management Architecture (UIMA) that handles data exchanges between the various UIMA components, such as analysis engines and unstructured information management applications. CAS supports data modeling via a type system independent of programming language, provides data access through a powerful indexing mechanism, and provides support for creating annotations on text data. In this paper we cover the CAS design philosophy, discuss the major design decisions, and describe some of the implementation details.

Book ChapterDOI
20 Sep 2004
TL;DR: The service infrastructure provided by OGSA-DAI is presented, providing a snapshot of its current state in an evolutionary process that aims to build infrastructure allowing easy integration of, and access to, distributed data using grids or web services.
Abstract: In today's large collaborative environments, potentially composed of multiple distinct organisations, uniform controlled access to data has become a key requirement if these organisations are to work together as Virtual Organisations. We refer to such an integrated set of data resources as a virtual data warehouse. The Open Grid Services Architecture – Data Access and Integration (OGSA-DAI) project was established to produce a common middleware solution, aligned with the Global Grid Forum's (GGF) OGSA vision [OGSA], to allow uniform access to data resources using a service based architecture. In this paper the service infrastructure provided by OGSA-DAI is presented, providing a snapshot of its current state in an evolutionary process that aims to build infrastructure allowing easy integration of, and access to, distributed data using grids or web services. More information about OGSA-DAI is available from the project web site: www.ogsadai.org.

Patent
13 Jan 2004
TL;DR: In this article, an adaptive load balancer is proposed to adaptively adjust to demands for access to data by replicating and migrating data such as files or service objects as needed between the server computer systems to accommodate data access demands.
Abstract: Methods and apparatus provide an adaptive load balancer that presents a virtual data system to client computer systems. The virtual data system provides access to an aggregated set of data, such as files or web service objects, available from a plurality of server data systems respectively operating within a plurality of server computer systems. The adaptive load balancer receives a client data access transaction from a client computer system that specifies a data access operation to be performed relative to the virtual data system presented to the client computer system. The adaptive load balancer processes the client data access transaction in relation to metadata associated with the virtual data system to provide access to the file or service object within a server computer system, or to access the metadata. The adaptive load balancer can work in conjunction with other adaptive load balancers to dynamically adjust to demands for access to data by replicating and migrating data such as files or service objects as needed between the server computer systems to accommodate data access demands.
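The virtual-data-system idea can be sketched as a metadata map from virtual paths to the set of servers holding each file, with frequently accessed files replicated to an additional server on demand. The hit threshold and the class interface are illustrative assumptions, not the patent's design.

```python
class VirtualDataSystem:
    """Metadata maps each virtual path to the servers holding it; paths
    accessed at least `replicate_after` times are replicated to a spare
    server (threshold and interface are illustrative)."""

    def __init__(self, placement, replicate_after=3):
        self.placement = {path: {server} for path, server in placement.items()}
        self.hits = {path: 0 for path in placement}
        self.replicate_after = replicate_after

    def access(self, path, spare_server):
        """Record an access and return the servers that can satisfy it."""
        self.hits[path] += 1
        if self.hits[path] >= self.replicate_after:
            self.placement[path].add(spare_server)  # replicate hot data
        return sorted(self.placement[path])
```

Clients only ever see the virtual path; where the bytes actually live is a metadata decision the balancer can revise as demand shifts.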

Patent
26 Feb 2004
TL;DR: An apparatus, system, and method for data access management on a storage device connected to a storage area network is described in this paper, where the storage server also includes a storage manager that is configured to manage data access by the storage agent to the requested storage device.
Abstract: An apparatus, system, and method are disclosed for data access management on a storage device connected to a storage area network. A client includes network connections to a first and second network, where the second network comprises a storage area network (SAN). The client also includes a storage management client and a storage agent. The storage agent is configured to minimize the amount of metadata processing that occurs on the client by sending the metadata or a copy thereof to a storage server to be stored in a centralized metadata database. The storage server also includes a storage manager that is configured to manage data access by the storage agent to the requested storage device.

Patent
11 Mar 2004
TL;DR: A method for protecting database applications including analyzing the activity on the server, analyzing the response from the server, and blocking malicious or unauthorized activity. Commands are analyzed for suspicious or malicious SQL statements or access to unauthorized data, and server responses are monitored for suspicious results likely to have occurred from a successful attack or unauthorized access to data.
Abstract: A method for protecting database applications including analyzing the activity on the server, analyzing the response from the server, and blocking malicious or unauthorized activity. Commands are analyzed for suspicious or malicious SQL statements or access to unauthorized data. Server responses are monitored for suspicious results likely to have occurred from a successful attack or unauthorized access to data. When malicious or unauthorized activity occurs, activity by the source is blocked or an alert is issued.
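A toy version of the command-analysis step might match incoming SQL against patterns characteristic of injection or unauthorized access. The patterns below are generic textbook examples, not those of the patented method, which also analyzes server responses.

```python
import re

SUSPICIOUS_PATTERNS = [
    r"\bor\s+1\s*=\s*1\b",   # tautology injection
    r";\s*drop\s+table\b",   # piggy-backed destructive statement
    r"\bunion\s+select\b",   # UNION-based data exfiltration
]

def is_suspicious(sql):
    """Flag SQL text matching any known-bad pattern (patterns are
    generic examples, not the patent's rules)."""
    lowered = sql.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```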

Patent
26 Mar 2004
TL;DR: In this paper, the authors propose a system and method that proxies data access commands across a cluster interconnect between storage appliances in a cluster and improve high availability especially during a loss of connectivity due to non-storage appliance hardware failure.
Abstract: A system and method proxies data access commands across a cluster interconnect between storage appliances in a cluster. Each storage appliance activates two ports for data access, a local port for data access requests directed to clients of the storage appliance and a proxy port for data access requests directed to the partner storage appliance. Clients utilizing multi-pathing software may send data access requests to either the local port of the storage appliance or the proxy port of the storage appliance. The system and method improve high availability especially during a loss of connectivity due to non-storage appliance hardware failure.

Journal ArticleDOI
TL;DR: A compiler-controlled dynamic on-chip scratch-pad memory (SPM) management framework that includes an optimization suite that uses loop and data transformations, an on- chip memory partitioning step, and a code-rewriting phase that collectively transform an input code automatically to take advantage of the on- Chip SPM.
Abstract: Optimizations aimed at improving the efficiency of on-chip memories in embedded systems are extremely important. Using a suitable combination of program transformations and memory design space exploration aimed at enhancing data locality enables significant reductions in effective memory access latencies. While numerous compiler optimizations have been proposed to improve cache performance, there are relatively few techniques that focus on software-managed on-chip memories. It is well-known that software-managed memories are important in real-time embedded environments with hard deadlines as they allow one to accurately predict the amount of time a given code segment will take. In this paper, we propose and evaluate a compiler-controlled dynamic on-chip scratch-pad memory (SPM) management framework. Our framework includes an optimization suite that uses loop and data transformations, an on-chip memory partitioning step, and a code-rewriting phase that collectively transform an input code automatically to take advantage of the on-chip SPM. Compared with previous work, the proposed scheme is dynamic, and allows the contents of the SPM to change during the course of execution, depending on the changes in the data access pattern. Experimental results from our implementation using a source-to-source translator and a generic cost model indicate significant reductions in data transfer activity between the SPM and off-chip memory.
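The placement decision at the heart of SPM management can be sketched as a greedy knapsack: favor the arrays with the most accesses per byte until the scratch-pad is full. The paper's framework is compiler-driven and changes SPM contents dynamically as access patterns shift; this static greedy choice is only an illustration.

```python
def pick_spm_contents(arrays, capacity):
    """Greedy placement sketch: each array is (name, size_bytes,
    access_count); fill the scratch-pad with the arrays that have the
    highest accesses-per-byte first. The real framework re-partitions
    dynamically during execution."""
    chosen, used = [], 0
    for name, size, accesses in sorted(
            arrays, key=lambda a: a[2] / a[1], reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```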

Patent
Paul M. Bird
30 Apr 2004
TL;DR: Disclosed as mentioned in this paper is a data processing system-implemented method for controlling access to data stored on a database having relational objects for which access restrictions are defined for elements of the relational objects.
Abstract: Disclosed is a data processing system-implemented method, a data processing system, and an article of manufacture for controlling access to data stored on a database having relational objects for which access restrictions are defined for elements of the relational objects. The data processing system-implemented method includes receiving a user request to access one or more relational objects of the database, identifying any access restrictions defined for the one or more relational objects, determining whether any identified access restrictions are applicable to the user request, determining whether any determined applicable access restrictions are to be enforced for the user request, and allowing access to the one or more relational objects based on the determined enforceable access restrictions.

Patent
22 Jan 2004
TL;DR: In this article, the authors propose a behavior-based access control mechanism for databases: whereas conventional content-based approaches rely on a predefined set of criteria for identifying access attempts to sensitive or prohibited data, scanning or sniffing transmissions for data items matching those criteria, tracking patterns of legitimate usage allows deviant access attempts to be detected and disallowed automatically.
Abstract: Typical conventional content-based database security mechanisms employ predefined criteria for identifying access attempts to sensitive or prohibited data. An operator identifies the criteria indicative of prohibited data, and the conventional content-based approach scans or “sniffs” the transmissions for data items matching the predefined criteria. In many environments, however, database usage tends to follow repeated patterns of legitimate usage. Such usage patterns, if tracked, are deterministic of normal, allowable data access attempts. Similarly, deviant data access attempts may be suspect. Recording and tracking patterns of database usage allows learning of an expected baseline of normal DB activity, or application behavior. Identifying baseline-divergent access attempts as deviant, unallowed behavior allows automatic learning and implementation of behavior-based access control. In this manner, data access attempts not matching previous behavior patterns are disallowed.
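The learning-then-enforcing behavior described above can be sketched as a two-phase baseline: record access patterns during a learning phase, then disallow anything outside the learned set. Modeling a pattern as a (user, table) pair is an assumption for illustration; the patent does not fix a pattern granularity.

```python
class BehaviorBaseline:
    """Two-phase sketch: record (user, table) pairs while learning,
    then allow only accesses matching the learned baseline."""

    def __init__(self):
        self.learning = True
        self.seen = set()

    def observe(self, user, table):
        """Return True if the access is allowed under the current phase."""
        if self.learning:
            self.seen.add((user, table))
            return True
        return (user, table) in self.seen
```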

Patent
11 Aug 2004
Abstract: A method of indexing data in a multidimensional database includes creating a multidimensional logical access model, creating a multidimensional data storage model in which data is located in cells that are stored and retrieved in blocks, gathering data access information derived from one or more user queries of the database, and reorganizing one or more selected cells in the multidimensional data storage model based on the data access information to reduce the time taken to access the one or more selected cells in response to a user query of the database. A computerized apparatus in communication with a multidimensional database includes a program to perform the method. A computer readable medium contains instructions to cause a computer to perform the method.

Patent
01 Oct 2004
TL;DR: In this paper, a peer-to-peer (P2P) networking system is described that provides a large, persistent object repository with the ability to easily scale to significant size.
Abstract: A peer-to-peer (P2P) networking system is disclosed that provides a large, persistent object repository with the ability to easily scale to significant size. Data security is provided using a distributed object data access mechanism to grant access to data objects to authorized users. Data objects stored within the object repository are provided a plurality of security options including plain text data, objects, encrypted data objects, and secure, secret sharing data objects. A data object query processing component permits users to locate requested information within the P2P networking system.

Patent
Trishul Chilimbi
16 Nov 2004
TL;DR: In this paper, a system and method are presented for determining where bottlenecks in a program's data accesses occur and providing information to a software developer as to why the bottlenecks occur and what may be done to correct them.
Abstract: A system and method for determining where bottlenecks in a program's data accesses occur and providing information to a software developer as to why the bottlenecks occur and what may be done to correct them. A stream of data access references is analyzed to determine data access patterns (also called data access sequences). The stream is analyzed to find frequently repeated data access sequences (called hot data streams). Properties of the hot data streams are calculated and upon selection of a hot data stream are displayed in a development tool that associates lines of code with the hot data streams.
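A "hot data stream" can be approximated as any frequently repeated contiguous subsequence of data accesses. The fixed-length n-gram counting below is a deliberate simplification; the patented analysis detects repeated sequences far more cleverly than this.

```python
from collections import Counter

def hot_data_streams(accesses, length=2, min_count=2):
    """Report every contiguous access subsequence of the given length
    that repeats at least `min_count` times -- a crude stand-in for
    hot-data-stream detection."""
    counts = Counter(tuple(accesses[i:i + length])
                     for i in range(len(accesses) - length + 1))
    return {seq: n for seq, n in counts.items() if n >= min_count}
```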

Patent
13 Aug 2004
TL;DR: In this paper, a method and system for providing data files to a community of users is described, where each user is associated with one or more of the courses and the content items in the content system are selectable by users for inclusion in the plurality of data files.
Abstract: A method and system are disclosed for providing data files to a community of users. The data files relate to a plurality of courses. Each user is associated with one or more of the courses. The system includes client devices operated by the users and a server system in communication with the client devices over a network. The server system provides to the client devices access to data files relating to courses with which the users are associated. The server system also includes a content system for storing content items from users. The content items in the content system are selectable by users for inclusion in one or more of the plurality of data files.

Patent
29 Jan 2004
TL;DR: In this article, a data storage device has a nonvolatile memory-storing data content, and a control processor for evaluating selected data content of the memory to establish whether there is a match between a characteristic of, or a derivative of, the data content and a reference data content characteristic, or derivative.
Abstract: A data storage device has a non-volatile memory-storing data content, and a control processor for evaluating selected data content of the memory to establish whether there is a match between a characteristic of, or a derivative of, the data content and a reference data content characteristic, or derivative. The processor takes an action in response to the match.

Proceedings ArticleDOI
19 Apr 2004
TL;DR: It is shown that the proposed approach provides high data availability, low bandwidth consumption, increased fault-tolerance and improved scalability of the overall system as compared to standard replica control protocols.
Abstract: In data-intensive distributed systems, replication is the most widely used approach to offer high data availability, low bandwidth consumption, increased fault-tolerance and improved scalability of the overall system. Replication-based systems implement replica control protocols that enforce a specified semantics of accessing the data. Also, the performance depends on a host of factors, chief of which is the protocol used to maintain consistency among object replicas. In this paper, we propose a new low-cost and high data availability protocol for maintaining replicated data on networked distributed computing systems. We show that the proposed approach provides high data availability, low bandwidth consumption, increased fault-tolerance and improved scalability of the overall system as compared to standard replica control protocols.
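The standard replica control protocols the paper compares against include classical quorum consensus: choose read and write quorum sizes R and W with R + W > N, so every read quorum intersects the latest write quorum. The sketch below shows only this majority-quorum baseline, not the paper's proposed lower-cost protocol.

```python
def majority_quorums(n_replicas):
    """Classical majority quorums: pick write quorum W and read quorum R
    with R + W > N, so any read intersects the most recent write."""
    w = n_replicas // 2 + 1
    r = n_replicas - w + 1
    assert r + w > n_replicas  # the intersection property
    return r, w
```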

Patent
22 Jul 2004
TL;DR: In this article, a method of controlling access to data comprises: a) in a first platform wrapping selected data content and at least one information flow control policy in a software wrapper; b) interrogating a second platform for compliance with a trusted platform specification; c) on successful interrogation of the second platform, sending the wrapped data content to the second platform; and d) unwrapping the wrapped content within the trusted environment for use.
Abstract: A method of controlling access to data comprises: a) in a first platform wrapping selected data content and at least one information flow control policy in a software wrapper; b) interrogating a second platform for compliance with a trusted platform specification; c) on successful interrogation of the second platform, sending the wrapped data content to the second platform; and d) unwrapping the wrapped data content within the trusted environment of the second platform for use.
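The wrap/interrogate/unwrap flow can be illustrated with an integrity-protected bundle that is released only after the receiving platform passes its trusted-platform check. This is a toy sketch: the `SEALING_KEY`, the MAC-based wrapper, and the boolean `attested` flag are all stand-ins for the TPM-style attestation and sealing the patent assumes:

```python
import hashlib
import hmac
import json

SEALING_KEY = b"hypothetical-shared-key"   # stands in for TPM-protected key material

def wrap(content: bytes, policy: dict) -> dict:
    """Bundle content with its information flow control policy and a MAC,
    so the receiving platform can detect tampering before enforcement."""
    payload = json.dumps(policy, sort_keys=True).encode() + b"|" + content
    mac = hmac.new(SEALING_KEY, payload, hashlib.sha256).hexdigest()
    return {"policy": policy, "content": content, "mac": mac}

def unwrap(package: dict, attested: bool):
    """Release content only after a successful trusted-platform
    interrogation and an intact wrapper."""
    if not attested:
        raise PermissionError("platform failed trusted-platform interrogation")
    payload = (json.dumps(package["policy"], sort_keys=True).encode()
               + b"|" + package["content"])
    expected = hmac.new(SEALING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, package["mac"]):
        raise ValueError("wrapper integrity check failed")
    return package["content"], package["policy"]
```

In a real deployment the key would be sealed to platform state and the policy enforced by the trusted environment, not merely carried alongside the content.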

Patent
14 Apr 2004
TL;DR: In this article, the effectiveness of online advertising using reach, frequency and effective reach is measured using a system and method that is able to count a user access, even if it is served from a cache.
Abstract: A system and method provide for measuring the effectiveness of online advertising using reach, frequency and effective reach (Fig. 1, 26). The system is able to count a user access even if it is served from a cache (Fig. 1, 14A). The system is further able to distinguish between a unique user accessing a web site for the first time and users making repeated accesses (Fig. 1, 10-12). The system does not require calculations over the large data access log file commonly stored on a server to count users, and preserves user privacy while maintaining the count (Fig. 1, 26).
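The three metrics the abstract names are straightforward to compute once per-user exposure counts are available. A minimal sketch, assuming a stream of (already anonymized) user identifiers and a hypothetical threshold `k` for effective reach:

```python
from collections import Counter

def reach_frequency(accesses, k=3):
    """From a stream of user IDs, compute:
    - reach: number of unique users exposed,
    - frequency: average exposures per reached user,
    - effective reach: users exposed at least k times."""
    counts = Counter(accesses)
    reach = len(counts)
    frequency = sum(counts.values()) / reach if reach else 0.0
    effective_reach = sum(1 for c in counts.values() if c >= k)
    return reach, frequency, effective_reach
```

The patent's contribution lies in how the raw accesses are captured (including cache-served hits, without mining server logs or compromising privacy); the aggregation itself is as simple as the counting above.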

Patent
23 Jan 2004
TL;DR: In this paper, a data retrieval system provides data to a user of a client computer connected to multiple data stores and multiple other computers, where a request for data is received at the client computer and intercepted at a reverse proxy caching connection.
Abstract: A data retrieval system provides data to a user of a client computer connected to multiple data stores and multiple other computers. A request for data is received at the client computer. The request is forwarded from the client computer toward a server computer and intercepted at a reverse proxy caching connection. An attempt is made to locate the data in a data store at the reverse proxy caching connection. If the data is not found, the request is forwarded to the server computer. To provide the data to the user, a user interface is provided. Initially, data elements associated with a grouping of data elements are identified. A subset of the identified data elements is then selected based on weights associated with the data elements, without selecting more than a specified number of data elements that are associated with a same sub-category.
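The interception step can be sketched as a cache that sits in front of the origin server: it answers from its local store when it can and forwards to the origin (then caches the response) otherwise. The `origin` callable and the "cache"/"origin" tags are illustrative assumptions, not the patent's interfaces:

```python
class ReverseProxyCache:
    """Intercept requests: serve from the local store when possible,
    otherwise forward to the origin server and cache the response."""
    def __init__(self, origin):
        self.origin = origin          # callable: request key -> data
        self.store = {}               # local data store at the proxy

    def get(self, key):
        if key in self.store:
            return self.store[key], "cache"
        data = self.origin(key)       # forward the missed request upstream
        self.store[key] = data
        return data, "origin"
```

A second request for the same key never reaches the server, which is the bandwidth and latency saving reverse proxy caching is after.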

Patent
Daniel M. Dias1, Rajat Mukherjee1
16 Nov 2004
TL;DR: In this article, a clustered computer system includes a shared data storage system, preferably a virtual shared disk (VSD) memory system, to which the computers in the cluster write data and read data, using data access requests.
Abstract: A clustered computer system includes a shared data storage system, preferably a virtual shared disk (VSD) memory system, to which the computers in the cluster write data and from which the computers read data, using data access requests. The data access requests can be associated with deadlines, and individual storage devices in the shared storage system satisfy competing requests based on the deadlines of the requests. The deadlines can be updated and requests can be killed, to facilitate real time data access for, e.g., multimedia applications such as video on demand.
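Satisfying competing requests by deadline, with deadline updates and request kills, matches an earliest-deadline-first queue. This is a minimal sketch of that discipline (a lazy-deletion heap; the patent does not prescribe a data structure, and the request IDs are hypothetical):

```python
import heapq

class DeadlineScheduler:
    """Per-device queue that serves competing data access requests in
    earliest-deadline-first order; deadlines can be updated and
    requests killed before they are served."""
    def __init__(self):
        self.heap = []                # (deadline, request_id) entries
        self.deadline = {}            # live requests: id -> current deadline

    def submit(self, req_id, deadline):
        self.deadline[req_id] = deadline
        heapq.heappush(self.heap, (deadline, req_id))

    def update(self, req_id, deadline):
        # Re-submit with the new deadline; the old heap entry goes stale.
        self.submit(req_id, deadline)

    def kill(self, req_id):
        self.deadline.pop(req_id, None)

    def next_request(self):
        while self.heap:
            deadline, req_id = heapq.heappop(self.heap)
            # Skip entries that were killed or superseded by an update.
            if self.deadline.get(req_id) == deadline:
                del self.deadline[req_id]
                return req_id
        return None
```

For a video-on-demand workload, tightening the deadline of a request whose playback buffer is draining lets the device serve it ahead of less urgent I/O.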