Showing papers on "Data access published in 1999"


Patent
Michel K. Bowman-Amuah1
31 Aug 1999
TL;DR: In this article, a system, method and article of manufacture are provided for separating logic and data access concerns during development of a persistent object, thereby insulating development of business logic from development of data access routines.
Abstract: A system, method and article of manufacture are provided for separating logic and data access concerns during development of a persistent object for insulating development of business logic from development of data access routines. A persistent object being developed is accessed and a state of the persistent object is detached into a separate state class. The state class serves as a contract between a logic development team and a data access development team. Logic development is limited by the logic development team to developing business logic. Data access development is restricted by the data access development team to providing data creation, retrieval, updating, and deletion capabilities.
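
The separation the abstract describes lends itself to a small illustration. The sketch below is not the patented implementation; CustomerState, Customer, and CustomerStore are hypothetical names, and the in-memory dictionary stands in for a real persistence mechanism. The point is only that the business logic touches the state class alone, while the data-access side offers create/retrieve/update/delete over that same state class.

```python
from dataclasses import dataclass


@dataclass
class CustomerState:
    """Detached state class: the contract shared by both teams (hypothetical)."""
    customer_id: int
    name: str
    balance: float


class Customer:
    """Business logic only; knows nothing about how state is persisted."""

    def __init__(self, state: CustomerState):
        self._state = state

    def apply_payment(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("payment must be positive")
        self._state.balance -= amount


class CustomerStore:
    """Data-access side: create/retrieve/update/delete the state only."""

    def __init__(self):
        self._rows = {}          # stand-in for a real database table

    def create(self, state: CustomerState) -> None:
        self._rows[state.customer_id] = state

    def retrieve(self, customer_id: int) -> CustomerState:
        return self._rows[customer_id]

    def update(self, state: CustomerState) -> None:
        self._rows[state.customer_id] = state

    def delete(self, customer_id: int) -> None:
        del self._rows[customer_id]
```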

513 citations


Journal ArticleDOI
TL;DR: It is shown that queries formulated on shared views, export schema, and shared “ontologies” can be mediated in the same way using the Context Interchange framework.
Abstract: The Context Interchange strategy presents a novel perspective for mediated data access in which semantic conflicts among heterogeneous systems are not identified a priori, but are detected and reconciled by a context mediator through comparison of context axioms corresponding to the systems engaged in data exchange. In this article, we show that queries formulated on shared views, export schema, and shared “ontologies” can be mediated in the same way using the Context Interchange framework. The proposed framework provides a logic-based object-oriented formalism for representing and reasoning about data semantics in disparate systems, and has been validated in a prototype implementation providing mediated data access to both traditional and web-based information sources.

419 citations


Patent
Michel K. Bowman-Amuah1
31 Aug 1999
TL;DR: In this paper, a system, method, and article of manufacture are provided for implementing information services patterns associated with a relational database management system in an object-oriented persistence architecture, where data is controlled in the database utilizing a data retrieval mechanism encapsulated in a data handler.
Abstract: A system, method, and article of manufacture are provided for implementing information services patterns associated with a relational database management system in an object-oriented persistence architecture. An object attribute is translated to and from a database value utilizing a database. Data is controlled in the database utilizing a data retrieval mechanism encapsulated in a data handler. Data access to the database is then organized among a plurality of business entities utilizing a plurality of business objects. Multiple business objects are retrieved across a network in one access operation. Retrieved data is mapped into objects. Then a warning is provided upon retrieval of objects missing at least one attribute. The warning includes information on how to handle the at least one missing attribute.
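
As a rough illustration of the retrieval, object mapping, and missing-attribute warning described above, here is a hedged Python sketch; Account, AccountHandler, and the dictionary "database" are invented stand-ins, not the patent's architecture.

```python
import warnings


class Account:
    """Business object; attributes are filled from retrieved rows."""
    EXPECTED = ("account_id", "owner", "balance")

    def __init__(self, row: dict):
        for attr in self.EXPECTED:
            setattr(self, attr, row.get(attr))


class AccountHandler:
    """Encapsulated retrieval mechanism: one access returns many objects."""

    def __init__(self, database):
        self._db = database      # any mapping of id -> row dict

    def retrieve_many(self, ids):
        objects = []
        for account_id in ids:
            row = self._db.get(account_id, {})
            obj = Account(row)
            missing = [a for a in Account.EXPECTED if a not in row]
            if missing:
                # The warning carries information on how to handle the gap.
                warnings.warn(
                    f"account {account_id}: missing {missing}; "
                    "treat missing attributes as unloaded and fetch them lazily"
                )
            objects.append(obj)
        return objects


# usage: one access operation returns several business objects
db = {1: {"account_id": 1, "owner": "A", "balance": 10.0},
      2: {"account_id": 2, "owner": "B"}}          # balance absent
accounts = AccountHandler(db).retrieve_many([1, 2])
```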

393 citations


Patent
03 Mar 1999
TL;DR: In this paper, the authors propose a method for partially synchronizing a local database stored on a local computer and a remote database stored on a remote computer. The method can also determine whether a given local update should be sent to the remote computer at all.
Abstract: A method for partially synchronizing a local database stored on a local computer and a remote database stored on a remote computer. The method includes forming a message including information related to a local update of the local database, selecting a path from one or more communication paths coupling the local computer to the remote computer to pass the message to the remote computer, and transmitting data including the message to the remote computer over the selected path. The method can include determining whether the local update to the local database should be sent to the remote computer. The method includes receiving the data at the remote computer, processing the message included in the received data, and providing the information related to the local update to a remote application executing on the remote computer. A remote database coupled to the remote application is then updated using the information related to the local update. Information related to an update of the remote database can also be selectively sent to update the local database. Messages sent between a local computer and a remote computer can be passed through a networked server computer, coupled by wired or wireless data networks to both the local computer and the remote computer.
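
A minimal sketch of the message-forming and path-selection steps might look as follows; the JSON message format, the path descriptors, and the cost-based selection rule are assumptions made for illustration, not details taken from the patent.

```python
import json
import time


def form_message(table: str, key, changes: dict) -> bytes:
    """Package information about a local update for the remote database."""
    return json.dumps({
        "table": table,
        "key": key,
        "changes": changes,
        "timestamp": time.time(),
    }).encode("utf-8")


def select_path(paths):
    """Pick one of several communication paths (here: lowest-cost available)."""
    usable = [p for p in paths if p["available"]]
    if not usable:
        raise RuntimeError("no path to the remote computer")
    return min(usable, key=lambda p: p["cost"])


paths = [{"name": "wired", "available": False, "cost": 1},
         {"name": "wireless", "available": True, "cost": 5}]
message = form_message("contacts", 42, {"phone": "555-0100"})
path = select_path(paths)
# transmit(message, via=path) would then send the data over the selected path
```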

275 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: A data movement and access service called Global Access to Secondary Storage (GASS) is proposed, which defines a global name space via Uniform Resource Locators and allows applications to access remote files via standard I/O interfaces.
Abstract: In wide area computing, programs frequently execute at sites that are distant from their data. Data access mechanisms are required that place limited functionality demands on an application or host system yet permit high-performance implementations. To address these requirements, we propose a data movement and access service called Global Access to Secondary Storage (GASS). This service defines a global name space via Uniform Resource Locators and allows applications to access remote files via standard I/O interfaces. High performance is achieved by incorporating default data movement strategies that are specialized for I/O patterns common in wide area applications and by providing support for programmer management of data movement. GASS forms part of the Globus toolkit, a set of services for high-performance distributed computing. GASS itself makes use of Globus services for security and communication, and other Globus components use GASS services for executable staging and real-time remote monitoring. Application experiences demonstrate that the library has practical utility.
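
The GASS API itself is not reproduced here, but the core idea, naming remote files by URL and hiding the data movement behind ordinary file I/O, can be sketched in a few lines of Python. The cache directory, the whole-file staging strategy, and the open_url helper are illustrative assumptions, not Globus interfaces.

```python
import os
import tempfile
import urllib.request
from urllib.parse import urlparse


CACHE_DIR = tempfile.mkdtemp(prefix="url_cache_")


def open_url(url: str, mode: str = "r"):
    """Open a remotely named file through the ordinary file interface.

    The file is staged into a local cache on first use, a crude stand-in
    for a default "fetch whole file" data-movement strategy.
    """
    local = os.path.join(CACHE_DIR, os.path.basename(urlparse(url).path))
    if not os.path.exists(local):
        urllib.request.urlretrieve(url, local)   # stage the remote file once
    return open(local, mode)                     # standard I/O from here on


# usage (any reachable URL works):
# with open_url("https://example.com/data/input.txt") as f:
#     for line in f:
#         process(line)
```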

246 citations


Patent
13 Jan 1999
TL;DR: In this article, a system for managing sensitive data is described, which prevents a system administrator from accessing sensitive data by storing data and identifier information on different computer systems, and each query is encrypted using two codes, the first code readable only by an identifier database and the second code readable only by a data access database.
Abstract: A system for managing sensitive data is described. The system prevents a system administrator from accessing sensitive data by storing data and identifier information on different computer systems. Each query is encrypted using two codes, the first code readable only by an identifier database and a second code readable only by a data access database. By routing the data path from a source terminal to the identifier database which substitutes an internal ID, then to the data access database and back to the source terminal, data security is significantly improved.
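
A toy sketch of the routing idea (with the two-code encryption deliberately elided) might look as follows; the store names and keys are invented, and the two dictionaries stand in for databases that would live on separate machines.

```python
# Two stores on (conceptually) different machines: one maps external
# identities to opaque internal IDs, the other keys sensitive data by
# internal ID only, so neither holds both the identity and the data.

identifier_db = {"patient:alice": "ID-9f3c"}      # identity -> internal ID
data_db = {"ID-9f3c": {"diagnosis": "..."}}       # internal ID -> data


def query(external_identity: str):
    """Route a request: source -> identifier DB -> data DB -> source."""
    internal_id = identifier_db[external_identity]   # first hop substitutes the ID
    return data_db[internal_id]                      # second hop never sees the identity


print(query("patient:alice"))
```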

237 citations


Proceedings ArticleDOI
Chen Ding1, Ken Kennedy1
01 May 1999
TL;DR: It is demonstrated that run-time program transformations can substantially improve computation and data locality and, despite the complexity and cost involved, a compiler can automate such transformations, eliminating much of the associated run- time overhead.
Abstract: With the rapid improvement of processor speed, performance of the memory hierarchy has become the principal bottleneck for most applications. A number of compiler transformations have been developed to improve data reuse in cache and registers, thus reducing the total number of direct memory accesses in a program. Until now, however, most data reuse transformations have been static---applied only at compile time. As a result, these transformations cannot be used to optimize irregular and dynamic applications, in which the data layout and data access patterns remain unknown until run time and may even change during the computation. In this paper, we explore ways to achieve better data reuse in irregular and dynamic applications by building on the inspector-executor method used by Saltz for run-time parallelization. In particular, we present and evaluate a dynamic approach for improving both computation and data locality in irregular programs. Our results demonstrate that run-time program transformations can substantially improve computation and data locality and, despite the complexity and cost involved, a compiler can automate such transformations, eliminating much of the associated run-time overhead.
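
One concrete run-time transformation in this spirit is first-touch data packing: an inspector records the order in which elements are first accessed, and an executor runs the original access sequence against the packed array so that accesses become largely sequential. The sketch below is an illustrative reconstruction with invented names and toy data, not the authors' compiler-generated code.

```python
def inspect_first_touch(access_sequence, num_elements):
    """Inspector: compute a new data layout ordered by first touch."""
    new_index = {}
    for idx in access_sequence:
        if idx not in new_index:
            new_index[idx] = len(new_index)
    # elements never touched keep a slot at the end
    for idx in range(num_elements):
        new_index.setdefault(idx, len(new_index))
    return new_index


def execute_packed(data, access_sequence, new_index):
    """Executor: run the original access pattern on the packed array."""
    packed = [None] * len(data)
    for old, new in new_index.items():
        packed[new] = data[old]
    remapped = [new_index[i] for i in access_sequence]
    return [packed[i] for i in remapped]     # accesses are now mostly sequential


data = list(range(100, 110))
accesses = [7, 2, 7, 9, 2, 0, 7]
layout = inspect_first_touch(accesses, len(data))
assert execute_packed(data, accesses, layout) == [data[i] for i in accesses]
```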

232 citations


Patent
13 Jan 1999
TL;DR: In this article, the authors propose a technique with which files encoded according to the Extensible Markup Language (XML) notation can be marked up to indicate that the content of the file (or some portion thereof) is dynamic in nature and is to be updated automatically to reflect changing information.
Abstract: A method, system, and computer-readable code for a technique with which files encoded according to the Extensible Markup Language (XML) notation can be marked up to indicate that the content of the file (or some portion thereof) is dynamic in nature and is to be updated automatically to reflect changing information. The proposed technique provides a novel way to specify that a data repository should be accessed as the source of the updates. Techniques are defined for specifying that this data repository access occurs once, and for specifying that it occurs when a set of conditions is satisfied (which may include periodically repeating the data repository access and content update). In one aspect, the data repository is a database; in another aspect, the data repository is a file system. Preferably, the Lightweight Directory Access Protocol (LDAP) is used as an access method when the data repository being accessed is a database storing an LDAP directory.

222 citations


Patent
06 Dec 1999
TL;DR: In this article, a storage network that facilitates the transfer of a data set from a first storage device to a second storage device, without blocking access to the data set during the transfer is provided.
Abstract: A storage network that facilitates the transfer of a data set from a first storage device to a second storage device, without blocking access to the data set during the transfer is provided. An intermediate device for the storage network is provided. The intermediate device comprises a plurality of communication interfaces. Data transfer resources are coupled to the plurality of interfaces, and are used for transferring data access requests identifying the data set from a third interface at least one of a first and second interfaces. A logic engine within the intermediate device is coupled to the interfaces, and is responsive to a control signal to manage the transfer of the data set through the interfaces.

194 citations


Patent
12 Aug 1999
TL;DR: In this paper, a data access server is interposed between the object-oriented programs and the data sources, and acts as an intermediary between the two, and a consolidated response from the intermediary server contains data items requested by the computer program, information regarding the hierarchical topology that relates the data items, and an indication of the possible object types that might embody the data item.
Abstract: Data moves between multiple, disparate data sources and the object-oriented computer programs that process the data. A data access server is interposed between the object-oriented programs and the data sources, and acts as an intermediary. The intermediary server receives requests for data access from object-oriented computer programs, correlates each request to one or more interactions with one or more data sources, performs each required interaction, consolidates the results of the interactions, and presents a singular response to the requesting computer program. The consolidated response from the intermediary server contains data items requested by the computer program, information regarding the hierarchical topology that relates the data items, and an indication of the possible object types that might embody the data items. The application program receives the consolidated response and builds an object hierarchy to embody the data items and to interface them to the rest of the application program. The class of an object used to embody data items is selected at execution time from a list of possible candidates.

189 citations


Patent
03 Mar 1999
TL;DR: In this paper, a connection-oriented protocol is proposed for the Common Internet File System (CIFS) protocol, where multiple clients share a Transmission Control Protocol (TCP) connection by allocation of virtual channels within the shared TCP connection and multiplexing of data packets of the virtual channels.
Abstract: A first data mover computer services data access requests from a network client, and a second data mover computer is coupled to the first data mover computer for servicing data access requests from the first data mover computer. The first data mover computer uses a connection-oriented protocol to obtain client context information and to respond to a session setup request from the client by authenticating the client. Then the first data mover computer responds to a file system connection request from the client by forwarding the client context information and the file system connection request to the second data mover computer. Then the first data mover computer maintains a connection between the first data mover computer and the second data mover computer when the client accesses the file system and the first data mover computer passes file access requests from the client to the second data mover computer and returns responses to the file access requests from the second data mover computer to the client. In a preferred embodiment, the connection-oriented protocol is the Common Internet File System (CIFS) Protocol, and multiple clients share a Transmission Control Protocol (TCP) connection between the first data mover computer and the second data mover computer by allocation of virtual channels within the shared TCP connection and multiplexing of data packets of the virtual channels over the shared TCP connection.
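
The virtual-channel idea can be illustrated with a simple framing scheme: each message on the shared connection carries a channel identifier and a length, so messages belonging to several clients can be multiplexed over one byte stream. The header layout below is invented for illustration and is not the CIFS or TCP wire format.

```python
import struct

HEADER = struct.Struct("!HI")     # 2-byte channel id, 4-byte payload length


def mux(channel_id: int, payload: bytes) -> bytes:
    """Frame one virtual-channel message for the shared connection."""
    return HEADER.pack(channel_id, len(payload)) + payload


def demux(stream: bytes):
    """Split a shared byte stream back into (channel_id, payload) messages."""
    offset = 0
    while offset < len(stream):
        channel_id, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        yield channel_id, stream[offset:offset + length]
        offset += length


shared = mux(1, b"open fileA") + mux(2, b"read fileB") + mux(1, b"close fileA")
assert list(demux(shared)) == [(1, b"open fileA"), (2, b"read fileB"),
                               (1, b"close fileA")]
```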

Posted Content
TL;DR: The next-generation astronomy digital archives will cover most of the sky at fine resolution in many wavelengths, from X-rays, through ultraviolet, optical, and infrared, and the archives will be stored at diverse geographical locations.
Abstract: The next-generation astronomy digital archives will cover most of the universe at fine resolution in many wavelengths, from X-rays to ultraviolet, optical, and infrared. The archives will be stored at diverse geographical locations. One of the first of these projects, the Sloan Digital Sky Survey (SDSS), will create a 5-wavelength catalog over 10,000 square degrees of the sky (see this http URL). The 200 million objects in the multi-terabyte database will have mostly numerical attributes, defining a space of 100+ dimensions. Points in this space have highly correlated distributions. The archive will enable astronomers to explore the data interactively. Data access will be aided by a multidimensional spatial index and other indices. The data will be partitioned in many ways. Small tag objects consisting of the most popular attributes speed up frequent searches. Splitting the data among multiple servers enables parallel, scalable I/O and applies parallel processing to the data. Hashing techniques allow efficient clustering and pair-wise comparison algorithms that parallelize nicely. Randomly sampled subsets allow debugging otherwise large queries at the desktop. Central servers will operate a data pump that supports sweeping searches that touch most of the data. The anticipated queries require special operators related to angular distances and complex similarity tests of object properties, like shapes, colors, velocity vectors, or temporal behaviors. These issues pose interesting data management challenges.

Patent
07 Jul 1999
TL;DR: In this paper, a uniform distributed data model enables device state information and policy information to be efficiently retrieved from virtually all network devices rather than solely from directory server(s). Using a registration and notification system, data elements are associated with a particular owner network device and with the other network devices that require access to them in order to derive the state information needed for taking network policy actions.
Abstract: Simple and complex policy mechanisms for policy-enabled networks advantageously comprise a Data Access Client Module (DACM) and a Policy Interpreter and Processor (PIP) for establishing data paths between a network device and data stores containing device configuration information, simple policy definitions (e.g., filter tables), and complex policy expressions. A uniform distributed data model enables device state information and policy information to be efficiently retrieved from virtually all network devices rather than solely from directory server(s). Using a registration and notification system, data elements (e.g., directory subtrees or executable modules) are associated with a particular owner network device and with other network devices that require access to those data elements in order to derive the state information needed for taking network policy actions. A data element is provided via messages sent to a target network device upon the occurrence of a relevant event (e.g., exceeding a prescribed bandwidth allocation or congestion level).

Patent
16 Nov 1999
TL;DR: In this paper, an architecture for document archival built on network-centric groupware such as Internet standards-based messaging is presented, which is accomplished in a manner similar to sending email to recipients, retrieving messages from folders, and classifying messages into folder hierarchies.
Abstract: The present invention discloses an architecture for document archival built on network-centric groupware such as Internet standards-based messaging. Archiving, retrieving, and classifying documents into meaningful collections is accomplished in a manner similar to sending email to recipients, retrieving messages from folders, and classifying messages into folder hierarchies. In the simplest scenario, if saveme.com is the archiving server's name, then sending an email to abc@saveme.com will cause the contents of the email message to be archived in the abc mailbox. The archived documents may be automatically stored in jukeboxes of non-tamperable media such as Write Once Read Multiple (WORM) Compact Disks (CD), which provide high storage capacity, low cost compared to magnetic disks, random data access, and long-term stability. The present invention leverages existing messaging infrastructures, and the resulting environment is not intrusive, easier to administer, and easier to deploy than conventional dedicated document archival systems.

Proceedings ArticleDOI
01 Aug 1999
TL;DR: In this article, the authors describe a prototype implementation of Intermemory, including an overall system architecture and implementations of key system components, which is a working Intermemory that tolerates up to 17 simultaneous node failures and includes a Web gateway for browser-based access to data.
Abstract: An Archival Intermemory solves the problem of highly survivable digital data storage in the spirit of the Internet. In this paper we describe a prototype implementation of Intermemory, including an overall system architecture and implementations of key system components. The result is a working Intermemory that tolerates up to 17 simultaneous node failures, and includes a Web gateway for browser-based access to data. Our work demonstrates the basic feasibility of Intermemory and represents significant progress towards a deployable system.

Proceedings ArticleDOI
23 Mar 1999
TL;DR: In this article, a simple analysis and comparison of the counting stage for the a priori association rules algorithm is presented, showing that a "column-wise" approach to data access is often more efficient than the standard row-wise approach.
Abstract: Efficient mining of data presents a significant challenge, due to problems of combinatorial explosion in the space and time often required for such processing. While previous work has focused on improving the efficiency of the mining algorithms, we consider how the representation, organization, and access of the data may significantly affect performance, especially when I/O costs are also considered. By a simple analysis and comparison of the counting stage for the a priori association rules algorithm, we show that a "column-wise" approach to data access is often more efficient than the standard row-wise approach. We also provide the results of empirical simulations to validate our analysis. The key idea in our approach is that counting in the a priori algorithm with data accessed in a column-wise manner significantly reduces the number of disk accesses required to identify itemsets with a minimum support in the database, primarily by reducing the degree to which data and counters need to be repeatedly brought into memory.
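
The row-wise versus column-wise distinction can be seen in a few lines: the column-wise variant first builds one tid-list per item and then counts a candidate by intersecting only the columns it mentions. This sketch ignores the I/O costs the paper analyzes and uses invented toy data.

```python
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
candidates = [frozenset(c) for c in combinations("abc", 2)]

# Row-wise: every transaction is scanned against every candidate.
row_counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}

# Column-wise: build one "column" (tid-list) per item, then count a candidate
# by intersecting only the columns it mentions.
columns = {}
for tid, t in enumerate(transactions):
    for item in t:
        columns.setdefault(item, set()).add(tid)

col_counts = {
    c: len(set.intersection(*(columns[item] for item in c)))
    for c in candidates
}

assert row_counts == col_counts
```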

Patent
05 Nov 1999
TL;DR: In this paper, an account data access method allowing access to an agency account database, such as that of a collection or other debt recovery agency, from public sites over a network by agency affiliates and clients of the agency is presented.
Abstract: An account data access method allowing access to an agency account database, such as that of a collection or other debt recovery agency, from public sites over a network by agency affiliates and clients of the agency. The invention provides for secure access to a client's accounts using a web browser over the internet. The invention also provides for different levels of access to the accounts among different representatives of the client.

Patent
16 Sep 1999
TL;DR: In this article, a system for data access in a packet-switched network, including a sender/computer including an operating unit, a first memory, a permanent storage memory and a processor and a remote receiver/computer, was presented.
Abstract: The invention provides a system for data access in a packet-switched network, including a sender/computer including an operating unit, a first memory, a permanent storage memory and a processor and a remote receiver/computer including an operating unit, a first memory, a permanent storage memory and a processor, the sender/computer and the receiver/computer communicating through the network; the sender/computer further including device for calculating digital digests on data; the receiver/computer further including a network cache memory and device for calculating digital digests on data in the network cache memory; and the receiver/computer and/or the sender/computer including device for comparison between digital digests. The invention also provides a method and apparatus for increased data access in a packet-switched network.
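
A rough sketch of digest-based access: the sender computes a digest of the payload and, when the receiver's network cache already holds data with that digest, no payload needs to cross the network. SHA-256 and the class names are illustrative choices, not the patented scheme.

```python
import hashlib


def digest(data: bytes) -> str:
    """Digital digest of a payload (SHA-256 here, for illustration)."""
    return hashlib.sha256(data).hexdigest()


class ReceiverCache:
    """Network cache on the receiver, keyed by digest."""

    def __init__(self):
        self._store = {}

    def has(self, d: str) -> bool:
        return d in self._store

    def put(self, data: bytes) -> None:
        self._store[digest(data)] = data

    def get(self, d: str) -> bytes:
        return self._store[d]


def send(payload: bytes, cache: ReceiverCache) -> bytes:
    """Sender transmits only the digest when the receiver already has the data."""
    d = digest(payload)
    if cache.has(d):
        return cache.get(d)     # served from the network cache, no payload sent
    cache.put(payload)          # otherwise transmit and cache the full payload
    return payload


cache = ReceiverCache()
page = b"<html>...</html>"
assert send(page, cache) == page      # first access: full transfer
assert send(page, cache) == page      # repeat access: digest hit
```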

Patent
19 Nov 1999
TL;DR: In this paper, the authors present a system and method for organizing, storing, retrieving and searching through binary representations of information in many forms and formats, where data is stored in its original file format, while maintaining metadata about the data items in a relational database.
Abstract: The present invention introduces a system and method for organizing, storing, retrieving and searching through binary representations of information in many forms and formats. Data is stored in its original file format, while maintaining metadata about the data items in a relational database. During searches the system utilizes the metadata to invoke data translators of the appropriate type to present data to the search engine itself. In addition, the system utilizes profiles and access control lists to restrict access to data to authorized users.

Journal ArticleDOI
TL;DR: A semantic caching mechanism which allows data to be cached as a collection of possibly related blocks, each of which is the result of a previously evaluated query, is proposed.
Abstract: Caching of remote data in a mobile client's local storage can improve data access performance and data availability. Traditional approaches are page-based, without taking advantage of the semantics of cached data. It is difficult for a client to determine if a query could be answered entirely based on locally cached data, forcing it to contact the database server for additional data. We propose a semantic caching mechanism which allows data to be cached as a collection of possibly related blocks, each of which is the result of a previously evaluated query. We investigate mechanisms for transforming projection-selection queries to reuse cached data blocks. This avoids transmitting unwanted data items over low bandwidth wireless channels. Cache replacement techniques based on the semantics of cached data are also proposed. We describe the design of our prototype and study its performance.
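
For the special case of single-attribute range predicates, the cache-answerability test reduces to a containment check, as in the hedged sketch below; the mechanism described in the article is more general, handling projection-selection queries and remainder queries sent to the server.

```python
# Each cached block remembers the range predicate that produced it, so a new
# query on the same attribute can be answered locally when its range is
# contained in a cached range (a simplified form of query trimming).

cache = []   # list of (lo, hi, rows) blocks for predicate lo <= price <= hi


def lookup(lo, hi):
    """Return locally cached rows if some block's range contains [lo, hi]."""
    for c_lo, c_hi, rows in cache:
        if c_lo <= lo and hi <= c_hi:
            return [r for r in rows if lo <= r["price"] <= hi]
    return None           # cache miss: the server must be contacted


def store(lo, hi, rows):
    cache.append((lo, hi, rows))


server_rows = [{"id": i, "price": p} for i, p in enumerate([5, 12, 18, 25, 40])]

# The first query goes to the "server" and its result is cached with its predicate.
store(10, 30, [r for r in server_rows if 10 <= r["price"] <= 30])

# A narrower query is then answered entirely from the cached block.
assert lookup(15, 20) == [{"id": 2, "price": 18}]
assert lookup(0, 50) is None        # wider query: not answerable locally
```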

Patent
29 Apr 1999
TL;DR: In this paper, the authors propose a caching mechanism for a directory service having a backing store, where directory search results are cached over a given data capture period, with the information then being used by a data analysis routine to generate a data access history for the user for a particular application.
Abstract: A caching mechanism for a directory service having a backing store. According to the invention, directory search results are cached over a given data capture period, with the information then being used by a data analysis routine to generate a data access history for the user for a particular application. That history is then used to generate a recommended pre-fetch time, a filter key for the pre-fetch, and a preferred cache replacement policy (e.g., static or LRU). Based on that information, a control routine pre-fetches and populates the cache with information that is expected to be needed by the user as a result of that access history.

Patent
James B. Lim1
18 Jun 1999
TL;DR: In this paper, the authors propose a technique to provide access to data storage pathways that connect a cluster of nodes to a data storage system in a manner that enables a failover operation to occur from a degraded first node to a second node when the first node suffers pathway degradation.
Abstract: A technique provides access to data storage pathways that connect a cluster of nodes to a data storage system in a manner that enables a failover operation to occur from a first node to a second node when the first node suffers pathway degradation forcing the first node to operate significantly slower than previously, even when the first node retains access to the data storage system through one or more available data storage pathways. Such a failover operation from the degraded first node to a second node allows the cluster as a whole to continue performing operations at a rate that is superior to that provided by the degraded first node. In one arrangement, a cluster of nodes connects to the data storage system through multiple sets of data storage pathways. A cluster framework and a set of pathway resource agents operate on the cluster of nodes. In particular, a respective portion of the cluster framework and a respective pathway resource agent operate on each node. The pathway resource agents receive, from the cluster framework, instructions for controlling the pathway sets and, in response, determine which of the pathway sets are available for transferring data between the cluster of nodes and the data storage system in accordance with predetermined access conditions. The pathway resource agents then provide, to the cluster framework, operation states identifying which of the pathway sets are available for transferring data between the cluster of nodes and the data storage system in accordance with the predetermined access conditions. The cluster framework can then access the pathway sets based on the operation states.

Journal ArticleDOI
TL;DR: A set of cost measures that can be applied to parallel algorithms to predict their computation, data access and communication performance make it possible to compare different parallel implementation strategies for data mining techniques without benchmarking each one.
Abstract: This article presents a set of cost measures that can be applied to parallel algorithms to predict their computation, data access and communication performance. These measures make it possible to compare different parallel implementation strategies for data mining techniques without benchmarking each one.

Journal ArticleDOI
TL;DR: This article considers evolutionary changes to IS-136 TDMA to enable it to provide a variety of PCS concepts and proposes options that would provide high-quality voice service for indoor and pedestrian systems such as cellular office systems and personal base stations.
Abstract: A number of factors have motivated most spectrum owners to make plans to deploy upbanded second-generation cellular technologies for use in the PCS bands in the United States. However, the second-generation cellular technologies will need to be enhanced to provide third-generation services. There is interest in using the PCS and cellular spectra to broaden the market of users and the range of use of wireless beyond where it stands today, and beyond the primary applications that drove the development of the technologies being deployed. There is also interest in new technologies to improve the performance of cellular and PCS services, reduce the cost, and improve availability. This article considers evolutionary changes to IS-136 TDMA to enable it to provide a variety of PCS concepts. These evolutionary changes are presented in the form of options that would (1) provide high-quality voice service for indoor and pedestrian systems such as cellular office systems and personal base stations; (2) support enhanced-bit-rate packet wireless data access to the Internet as well as circuit data access; (3) provide smart antenna technology to improve coverage, quality, and capacity; (4) automatically assign frequencies for operation and provide for dynamic channel reconfiguration; (5) support microcellular arrangements to provide low-cost and high-capacity service in dense areas; and (6) support a future high-speed packet data access mode through a wideband system that is complementary to IS-136 TDMA and supports single-terminal operation.

Journal ArticleDOI
TL;DR: This article addresses inferential disclosure of confidential views in multidimensional categorical databases and demonstrates that any structural (i.e., data-value-independent) method for detecting disclosure can fail.
Abstract: As databases grow more prevalent and comprehensive, database administrators seek to limit disclosure of confidential information while still providing access to data. Practical databases accommodate users with heterogeneous needs for access. Each class of data user is accorded access to only certain views. Other views are considered confidential, and hence to be protected. Using illustrations from health care and education, this article addresses inferential disclosure of confidential views in multidimensional categorical databases. It demonstrates that any structural (i.e., data-value-independent) method for detecting disclosure can fail. Consistent with previous work for two-way tables, it presents a data-value-dependent method to obtain tight lower and upper bounds for confidential data values. For two-dimensional projections of categorical databases, it exploits the network structure of a linear programming (LP) formulation to develop two transportation flow algorithms that are both computationally efficient and insightful. These algorithms can be easily implemented through two new matrix operators, cell-maxima and cell-minima. Collectively, this method is called matrix comparative assignment (MCA). Finally, it extends both the LP and MCA approaches to inferential disclosure when accessible views have been masked.
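
For the simplest two-way case, where the accessible views are the row totals r_i, the column totals c_j, and the grand total N of a frequency table, value-dependent bounds of the kind computed here take the classical form below (shown only as an illustration; the article's LP and MCA machinery extends this to masked and multidimensional views).

```latex
% Classical cell bounds for a two-way table with released margins:
% row totals r_i, column totals c_j, grand total N.
\[
  \max\!\left(0,\; r_i + c_j - N\right)
  \;\le\; x_{ij} \;\le\;
  \min\!\left(r_i,\; c_j\right)
\]
% If the interval collapses to a single value, the confidential cell x_{ij}
% is exactly disclosed by the accessible views.
```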

Patent
22 Dec 1999
TL;DR: In this article, the authors propose a session-specific access control for data held at a local data processing apparatus to determine the security attributes of the data (for example, retrieving queue attributes from a database).
Abstract: Methods and data processing apparatuses are provided which enable controlling, from one data processing apparatus, access to data held (for example on a queue) at another data processing apparatus. When a requestor wishes to access data held at a local data processing apparatus, a request must be sent to a remote data processing apparatus to determine the security attributes of the data (for example, retrieving queue attributes from a database). The requestor cannot access the data until the security attributes are fully determined at the local data processing apparatus, and since communication with a remote system is required to make this determination, the remote apparatus is able to log the requests for data access. The security attributes are preferably an identifier of a cryptor used in encryption, a compressor used in compression, and an authenticator for authenticating the requestor. The determination of security attributes is preferably required to be repeated for each requestor session, with the attributes being deleted from the local data processing apparatus at the end of a session and the requestor being unable to view or save the attributes. This enables session-specific access control.

Proceedings ArticleDOI
01 Aug 1999
TL;DR: An important feature of the proposed dynamic data delivery model is that data are disseminated through various storage mediums according to the dynamically collected data access patterns.
Abstract: Various techniques have been developed to improve the performance of wireless information services. Techniques such as information broadcasting, caching of frequently accessed data, and point-to-point channels for pull-based data requests are often used to reduce data access time. To efficiently utilize information broadcast, indexing and scheduling schemes are employed for the organization of data broadcast. Most of the studies in the literature focused either on an individual technique or on a combination of them with some restrictive assumptions. There is no study considering these techniques working together in an integrated manner. In this paper, we propose a dynamic data delivery model for wireless communication environments. An important feature of our model is that data are disseminated through various storage mediums according to the dynamically collected data access patterns. Various results are presented in a set of simulation studies, which give some of the intuitions behind the design of a wireless data delivery system.

Patent
16 Mar 1999
TL;DR: In this article, the authors propose a system and method for using a virtual device established at a computer system to access data as it existed at a selected moment in a mass storage system associated with the computer system.
Abstract: A system and method for using a virtual device established at a computer system to access data as it existed at a selected moment in a mass storage system associated with the computer system, regardless of whether new data has been written to the mass storage system. When an original data block is to be overwritten in the mass storage system with a new data block, the original data block is first preserved in a preservation memory associated with the computer system. The preservation memory thereby preserves the original data block as it existed at the selected moment. A virtual device established at the computer system provides access to data as it existed at the selected moment. This data may include original data blocks preserved in the preservation memory and other original data blocks that remain in the mass storage device, and which have not been overwritten with new data. In order to provide access to the data, the virtual device accesses the preservation memory to obtain those original data blocks that have been preserved therein and also accesses the mass storage device to obtain those original data blocks that remain in the mass storage device.
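
The preserve-before-overwrite behaviour can be sketched in a handful of lines; the block-level granularity and the class name are illustrative assumptions, not the patented mechanism.

```python
class SnapshotStorage:
    """Mass storage plus a preservation memory capturing a selected moment."""

    def __init__(self, blocks):
        self.blocks = list(blocks)        # current contents of mass storage
        self.preserved = {}               # original blocks saved before overwrite

    def write(self, index, new_block):
        # Preserve the original block the first time it is overwritten.
        if index not in self.preserved:
            self.preserved[index] = self.blocks[index]
        self.blocks[index] = new_block

    def read_current(self, index):
        return self.blocks[index]

    def read_virtual(self, index):
        """Virtual device: data as it existed at the selected moment."""
        if index in self.preserved:
            return self.preserved[index]  # original block held in preservation memory
        return self.blocks[index]         # never overwritten; still in mass storage


store = SnapshotStorage([b"A0", b"B0", b"C0"])
store.write(1, b"B1")                     # B0 is preserved before being replaced
assert store.read_current(1) == b"B1"
assert store.read_virtual(1) == b"B0"
assert store.read_virtual(0) == b"A0"
```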

Proceedings ArticleDOI
Yon Dohn Chung1, Myoung Ho Kim1
19 Apr 1999
TL;DR: A measure named QueryDistance (QD) is proposed, which represents the degree of coherence for the data set accessed by a query, together with a practically usable method named QEM, which constructs the broadcast schedule by expanding each query's data set in a greedy way.
Abstract: In mobile distributed systems the data on air can be accessed by a large number of clients. This paper describes the way clients access the wireless broadcast data with short latency. We define and analyze the problem of wireless data scheduling and we propose a measure, named QueryDistance (QD), which represents the degree of coherence for the data set accessed by a query. We give a practically usable method named QEM which constructs the broadcast schedule by expanding each query's data set in a greedy way. We also evaluate the performance of our method by experiments.

Patent
01 Dec 1999
TL;DR: In this article, a method and system for implementing various protocols for a Time Division Duplex (TDD) Code Division Multiple Access (CDMA) wireless local loop system that utilizes unique embedded concentrated access and embedded data access in a Wireless Local Loop (WLL) is described.
Abstract: A method and system for implementing various protocols for a Time Division Duplex (TDD) Code Division Multiple Access (CDMA) wireless local loop system that utilizes unique embedded concentrated access and embedded data access in a Wireless Local Loop (WLL) is described. The method and system further provide for dynamic pool sizing of the access channels. The protocols support POTS (Plain Old Telephone Service), ISDN, and direct data service in a point-to-multipoint configuration. The protocols are inherently flexible so as to provide enhanced bandwidth and quality of service (QOS) via CDMA. Channel concatenation (multi-code modulation) provides a multiplicity of channels. The system utilizes frequency division duplex (FDD) operation so as to double capacity. The system further utilizes a scalable architecture for bandwidth expansion and higher data rate services.