
Showing papers on "Distributed database published in 2007"


01 Jan 2007
TL;DR: This study describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications, and identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity compared to centralized or host-based systems.
Abstract: Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity compared to centralized or host-based systems. Distributed systems may need to deal with heterogeneous platforms, with multiple databases and (possibly) different schemas, with the design and implementation of scalable and effective protocols for communicating among the data sites, and with the selective and efficient use of the information that is gathered from other peer data sites. Other important problems intrinsic to data mining systems that must not be ignored include, first, the ability to take advantage of newly acquired information that was not previously available when models were computed and to combine it with existing models, and second, the flexibility to incorporate new machine learning methods and data mining technologies. We explore these issues within the context of JAM and evaluate various proposed solutions through extensive empirical studies.
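The core mechanism described here, combining base classifiers trained at separate sites through a higher-level meta-classifier, can be illustrated with a small stacking-style sketch. This is not the JAM implementation; the synthetic data, the scikit-learn models, and the labeling rule are assumptions for illustration only.

```python
# Minimal sketch (not JAM): stacking-style meta-learning. Base classifiers are
# trained on disjoint "sites"; a meta-classifier is then trained on their
# predictions over a shared validation set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_site(n=300):
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical labeling rule
    return X, y

sites = [make_site() for _ in range(3)]        # three "distributed databases"
X_val, y_val = make_site(200)                  # shared data for meta-learning
X_test, y_test = make_site(200)

# 1. Each site trains its own base classifier locally.
base_models = [DecisionTreeClassifier(max_depth=3).fit(X, y) for X, y in sites]

# 2. Base predictions on the validation set become meta-level features.
meta_features = np.column_stack([m.predict(X_val) for m in base_models])
meta_clf = LogisticRegression().fit(meta_features, y_val)

# 3. The meta-classifier combines base predictions at query time.
test_features = np.column_stack([m.predict(X_test) for m in base_models])
print("meta-classifier accuracy:", (meta_clf.predict(test_features) == y_test).mean())
```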

218 citations


Book
30 Sep 2007
TL;DR: Architecture of a Database System (AODS) as mentioned in this paper is an architectural discussion of DBMS design principles, including process models, parallel architecture, storage system design, transaction system implementation, query processor and optimizer architectures, and typical shared components and utilities.
Abstract: Database Management Systems (DBMSs) are a ubiquitous and critical component of modern computing, and the result of decades of research and development in both academia and industry. Architecture of a Database System presents an architectural discussion of DBMS design principles, including process models, parallel architecture, storage system design, transaction system implementation, query processor and optimizer architectures, and typical shared components and utilities. Successful commercial and open-source systems are used as points of reference, particularly when multiple alternative designs have been adopted by different groups. Historically, DBMSs were among the earliest multi-user server systems to be developed, and thus pioneered many systems design techniques for scalability and reliability now in use in many other contexts. While many of the algorithms and abstractions used by a DBMS are textbook material, Architecture of a Database System addresses the systems design issues that make a DBMS work. Architecture of a Database System is an invaluable reference for database researchers and practitioners and for those in other areas of computing interested in the systems design techniques for scalability and reliability that originated in DBMS research and development.

186 citations


Proceedings ArticleDOI
21 Oct 2007
TL;DR: This paper presents TOD, a portable Trace-Oriented Debugger for Java, which combines an efficient instrumentation for event generation, a specialized distributed database for scalable storage and efficient querying, support for partial traces in order to reduce the trace volume to relevant events, and innovative interface components for interactive trace navigation and analysis in the development environment.
Abstract: Omniscient debuggers make it possible to navigate backwards in time within a program execution trace, drastically improving the task of debugging complex applications. Still, they are mostly ignored in practice due to the challenges raised by the potentially huge size of the execution traces. This paper shows that omniscient debugging can be realistically realized through the use of different techniques addressing efficiency, scalability and usability. We present TOD, a portable Trace-Oriented Debugger for Java, which combines an efficient instrumentation for event generation, a specialized distributed database for scalable storage and efficient querying, support for partial traces in order to reduce the trace volume to relevant events, and innovative interface components for interactive trace navigation and analysis in the development environment. Provided a reasonable infrastructure, the performance of TOD allows a responsive debugging experience in the face of large programs.

156 citations


Proceedings ArticleDOI
15 Apr 2007
TL;DR: Kite combines schema matching and structure discovery techniques to find approximate foreign-key joins across heterogeneous databases and exploits the joins - discovered automatically across the databases - to enable fast and effective querying over the distributed data.
Abstract: Keyword search is a familiar and potentially effective way to find information of interest that is "locked" inside relational databases. Current work has generally assumed that answers for a keyword query reside within a single database. Many practical settings, however, require that we combine tuples from multiple databases to obtain the desired answers. Such databases are often autonomous and heterogeneous in their schemas and data. This paper describes Kite, a solution to the keyword-search problem over heterogeneous relational databases. Kite combines schema matching and structure discovery techniques to find approximate foreign-key joins across heterogeneous databases. Such joins are critical for producing query results that span multiple databases and relations. Kite then exploits the joins - discovered automatically across the databases - to enable fast and effective querying over the distributed data. Our extensive experiments over real-world data sets show that (1) our query processing algorithms are efficient and (2) our approach manages to produce high-quality query results spanning multiple heterogeneous databases, with no need for human reconciliation of the different databases.
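A rough sketch of the kind of approximate foreign-key join discovery described above: propose a join between two columns when most values of one are contained in the other. The column names, sample values, and the 0.7 threshold are hypothetical; Kite's actual schema-matching and structure-discovery machinery is considerably richer.

```python
# Minimal sketch, not Kite itself: propose approximate foreign-key joins by
# measuring how much of one column's values are contained in another column.
def containment(child_vals, parent_vals):
    child, parent = set(child_vals), set(parent_vals)
    return len(child & parent) / len(child) if child else 0.0

# Hypothetical columns from two autonomous databases.
db_a = {"orders.cust_id": [101, 102, 103, 104, 207]}
db_b = {"customers.id": [101, 102, 103, 104, 105, 106]}

candidates = []
for a_col, a_vals in db_a.items():
    for b_col, b_vals in db_b.items():
        score = containment(a_vals, b_vals)
        if score >= 0.7:                      # assumed threshold
            candidates.append((a_col, b_col, score))

print(candidates)   # [('orders.cust_id', 'customers.id', 0.8)]
```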

118 citations


Patent
24 Sep 2007
TL;DR: In this paper, the authors propose a geographically distributed storage system for managing the distribution of data elements wherein requests for given data elements incur a geographic inertia, and the system comprises geographically distributed sites, each comprising a site storage unit for locally storing a portion of a globally coherent distributed database that includes the data elements and a local access point for receiving requests relating to ones of the data items.
Abstract: A geographically distributed storage system for managing the distribution of data elements wherein requests for given data elements incur a geographic inertia. The geographically distributed storage system comprises geographically distributed sites, each comprising a site storage unit for locally storing a portion of a globally coherent distributed database that includes the data elements and a local access point for receiving requests relating to ones of the data elements. The geographically distributed storage system also comprises a data management module for forwarding at least one requested data element to the local access point at a first of the geographically distributed sites from which the request is received and storing the at least one requested data element at the first site, thereby to provide local accessibility to the data element for future requests from the first site while maintaining the global coherency of the distributed database.
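A minimal sketch of the "geographic inertia" behaviour the abstract describes: a requested data element is copied to the requesting site so that later requests from that site are served locally. The class and method names are hypothetical, and the global-coherency maintenance the patent requires is not shown.

```python
# Minimal sketch (hypothetical names): pull a requested element to the
# requesting site and keep it there for future local access.
class Site:
    def __init__(self, name, store=None):
        self.name = name
        self.store = dict(store or {})      # locally held portion of the database

    def request(self, key, peers):
        if key in self.store:               # already local: serve immediately
            return self.store[key]
        for peer in peers:                  # otherwise fetch from a site that holds it
            if key in peer.store:
                self.store[key] = peer.store[key]   # keep a local copy
                return self.store[key]
        raise KeyError(key)

europe = Site("europe", {"doc-17": "..."})
asia = Site("asia")
print(asia.request("doc-17", peers=[europe]))   # fetched remotely, now cached at asia
print("doc-17" in asia.store)                   # True: future requests are local
```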

96 citations


Journal ArticleDOI
TL;DR: This paper designs algorithms for both vertically and horizontally partitioned data, with cryptographically strong privacy, that work for two or more parties and are more efficient than the existing solution.

88 citations


Book
16 Jan 2007
TL;DR: This first of a kind book places spatial data within the broader domain of information technology (IT) while providing a comprehensive and coherent explanation of the guiding principles, methods, implementation and operational management of spatial databases within the workplace.
Abstract: This first of a kind book places spatial data within the broader domain of information technology (IT) while providing a comprehensive and coherent explanation of the guiding principles, methods, implementation and operational management of spatial databases within the workplace. The text explains the key concepts, issues and processes of spatial data implementation and provides a holistic management perspective that complements the technical aspects of spatial data stressed in other textbooks. In this respect, this book is unique in its coverage of spatial database principles and architecture, database modelling including UML, database and spatial data standards, spatial data infrastructure, database implementation, and workplace-oriented project management including user needs study and end user education. The text first overviews the current state of spatial information technology and it concludes with a speculative account of likely future developments. Cutting edge research and practical workplace needs are defined and explained. Topics covered, among others, include strategies for end user education, current spatial data standards and their importance, legal issues and liabilities in the ownership and use of spatial data, spatial metadata use within distributed databases, the Internet and Web-based solutions to database deployment, quality assurance and quality control in database implementation and use, spatial decision support, and spatial data mining. The book applies equally to senior undergraduate and graduate courses and students, as well as spatial data managers and practitioners already in the workplace. It will enhance their technical and human-resource based understanding of spatial data management. Certification courses that seek to prepare students for careers in the spatial information industry and courses targeted at enhancing needed geospatial workplace knowledge and skills will benefit greatly from its content.

79 citations


Proceedings ArticleDOI
11 Jun 2007
TL;DR: This paper studies the database selection problem for relational data sources, and proposes a method that effectively summarizes the relationships between keywords in a relational database based on its structure, and develops effective ranking methods based on the keyword relationship summaries.
Abstract: The wide popularity of free-and-easy keyword-based searches over the World Wide Web has fueled the demand for incorporating keyword-based search over structured databases. However, most of the current research work focuses on keyword-based searching over a single structured data source. With the growing interest in distributed databases and service oriented architecture over the Internet, it is important to extend such a capability over multiple structured data sources. One of the most important problems for enabling such a query facility is to be able to select the most useful data sources relevant to the keyword query. Traditional database summary techniques used for selecting unstructured data sources, developed in the IR literature, are inadequate for our problem, as they do not capture the structure of the data sources. In this paper, we study the database selection problem for relational data sources, and propose a method that effectively summarizes the relationships between keywords in a relational database based on its structure. We develop effective ranking methods based on the keyword relationship summaries in order to select the most useful databases for a given keyword query. We have implemented our system on PlanetLab. In that environment, we conduct extensive experiments with real datasets to demonstrate the effectiveness of our proposed summarization method.
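One way to picture the selection step, under the assumption (purely for illustration) that each database's summary records which keyword pairs can be connected through its structure: score every database against the query's keyword pairs and rank by that score. The summary contents and scoring below are hypothetical, not the paper's actual data structures.

```python
# Minimal sketch: rank databases by how many query keyword pairs their
# (hypothetical) keyword relationship summaries can connect.
from itertools import combinations

# Hypothetical summaries: sets of keyword pairs that co-occur in joinable tuples.
summaries = {
    "db_movies": {("hanks", "spielberg"), ("hanks", "1998")},
    "db_sales":  {("invoice", "2007")},
}

def score(summary, keywords):
    return sum(1 for pair in combinations(sorted(keywords), 2)
               if pair in summary or pair[::-1] in summary)

query = {"hanks", "spielberg"}
ranking = sorted(summaries, key=lambda db: score(summaries[db], query), reverse=True)
print(ranking)   # ['db_movies', 'db_sales']
```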

78 citations


Patent
31 Aug 2007
TL;DR: In this paper, a distributed transformational spatio-temporal object relational (T-STOR) database management system (dbms) is described, where data is continuously input, analyzed, organized, reorganized and used for specific commercial and industrial applications.
Abstract: A system, methods and apparatus are described involving the self-organizing dynamics of networks of distributed computers. The system is comprised of complex networks of databases. The system presents a novel database architecture called the distributed transformational spatio-temporal object relational (T-STOR) database management system (dbms). Data is continuously input, analyzed, organized, reorganized and used for specific commercial and industrial applications. The system uses intelligent mobile software agents in a multi-agent system in order to learn, anticipate, and adapt and to perform numerous functions, including search, analysis, collaboration, negotiation, decision making and structural transformation. The system links together numerous complex systems involving distributed networks to present a novel model for dynamic adaptive computing systems, which includes plasticity of collective behavior and self-organizing behavior in intelligent system structures.

77 citations


Journal ArticleDOI
TL;DR: The results of performance evaluations demonstrate that the proposed hierarchical RFID network architecture reduces the network and database system loading by 41.8% and 83.2%, respectively.

72 citations


Patent
04 Jul 2007
TL;DR: In this paper, a neutral ontology model of a query front end characterized by ontology schemata is presented, which subsumes the plurality of different databases on the network in order to provide a common semantic interface for use in generating queries for data from any of the different databases.
Abstract: According to an embodiment, a method includes constructing a neutral ontology model of a query front end characterized by ontology schemata which subsume the plurality of different databases on the network in order to provide a common semantic interface for use in generating queries for data from any of the different databases, importing respective database metadata representing logical and physical structures of each database subscribed for receiving queries for data from the database using the query front end, constructing mappings of the database metadata representing the logical and physical structures of each subscribed database to the ontology schemata of the query front end, and storing the constructed mappings for use by the query front end for queries through the common semantic interface of the neutral ontology model for data from any of the different databases.

Journal ArticleDOI
TL;DR: This article shows how tools from information technology—specifically, secure multiparty computation and networking—can be used to perform statistically valid analyses of distributed databases, and presents protocols for securely performing regression, maximum likelihood estimation, and Bayesian analysis.
Abstract: In industrial and government settings, there is often a need to perform statistical analyses that require data stored in multiple distributed databases. However, the barriers to literally integrating these data can be substantial, even insurmountable. In this article we show how tools from information technology—specifically, secure multiparty computation and networking—can be used to perform statistically valid analyses of distributed databases. The common characteristic of these methods is that the owners share sufficient statistics computed on the local databases in a way that protects each owner's data from the other owners. Our focus is on horizontally partitioned data, in which data records rather than attributes are spread among the databases. We present protocols for securely performing regression, maximum likelihood estimation, and Bayesian analysis, as well as secure construction of contingency tables. We outline three current research directions: a software system implementing the protocols, se...
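A minimal sketch of the secure-summation building block that such protocols typically use to share sufficient statistics without revealing any single owner's value: a random mask hides each partial sum as it is passed around. This stands in for, and greatly simplifies, the paper's regression, maximum likelihood, and Bayesian protocols.

```python
# Minimal sketch of a secure-sum step over horizontally partitioned data:
# parties learn the total of their local sums without revealing individual
# values. Not the paper's full protocols.
import random

def secure_sum(local_values, modulus=2**61 - 1):
    """Each party only ever sees a uniformly random-looking partial sum."""
    mask = random.randrange(modulus)          # chosen by the initiating party
    running = mask
    for v in local_values:                    # stands in for passing the sum around
        running = (running + v) % modulus
    return (running - mask) % modulus         # initiator removes its mask

# Hypothetical local sums of an attribute at three sites.
print(secure_sum([1200, 845, 990]))           # 3035
```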

Patent
13 Feb 2007
TL;DR: In this article, a method of processing a transaction request at a database load balancer is proposed, where the transaction request is comprised of one or more operations, and each operation is associated with a database lock.
Abstract: A method of processing a transaction request at a database load balancer. The method comprises receiving the transaction request, where the transaction request is comprised of one or more operations; analyzing the transaction request to determine the one or more operations; associating one or more database locks with each of the one or more operations; analyzing one or more of the database locks to determine the one or more sequence numbers associated with each of the one or more operations; and transmitting the one or more operations with the associated database locks and the sequence numbers to one or more database servers accessible to the database load balancer.
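Read as pseudocode, the claimed flow might look like the following sketch: split the transaction into operations, attach a lock and a sequence number to each, and forward the annotated operations to every accessible database server. The lock-naming rule, the data structures, and the server representation are assumptions, not the patent's design.

```python
# Minimal sketch (hypothetical structure) of the load-balancer flow described above.
from dataclasses import dataclass
from itertools import count

_seq = count(1)

@dataclass
class Operation:
    sql: str
    lock: str = ""
    seq: int = 0

def process_transaction(operations, servers):
    annotated = []
    for op in operations:
        op.lock = f"lock:{op.sql.split()[1]}"   # crude: lock named after target table
        op.seq = next(_seq)                     # sequence number for ordering
        annotated.append(op)
    for server in servers:                      # transmit to all accessible servers
        server.extend(annotated)
    return annotated

server_a, server_b = [], []
txn = [Operation("UPDATE accounts SET ..."), Operation("INSERT audit VALUES ...")]
for op in process_transaction(txn, [server_a, server_b]):
    print(op.seq, op.lock, op.sql)
```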

Proceedings ArticleDOI
21 May 2007
TL;DR: This paper presents extensions of bloom filter operations that are applicable to a wide range of usages where bloom filters are used for compressed set representation, and points out how they improve the performance of distributed joins.
Abstract: Bloom filter based algorithms have proven successful as a very efficient technique to reduce the communication costs of database joins in a distributed setting. However, the full potential of bloom filters has not yet been exploited. Especially in the case of multi-joins, where the data is distributed among several sites, additional optimization opportunities arise, which require new bloom filter operations and computations. In this paper, we present these extensions and point out how they improve the performance of such distributed joins. While the paper focuses on efficient join computation, the described extensions are applicable to a wide range of usages where bloom filters are used for compressed set representation.
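A minimal bloom filter sketch showing the compressed-set-representation role the paper builds on: site A ships a small filter over its join keys instead of the keys themselves, and site B forwards only tuples that might match. The hashing scheme and sizes are assumptions; the paper's multi-join extensions are not shown.

```python
# Minimal sketch (assumptions: simple double hashing, fixed sizes) of a bloom
# filter standing in for the join-key set of a remote relation.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0
    def _hashes(self, item):
        digest = hashlib.sha256(str(item).encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]
    def add(self, item):
        for h in self._hashes(item):
            self.bits |= 1 << h
    def __contains__(self, item):
        return all(self.bits >> h & 1 for h in self._hashes(item))

# Site A builds a filter over its join keys and ships only the filter to site B.
site_a_keys = {3, 17, 42, 99}
bf = BloomFilter()
for key in site_a_keys:
    bf.add(key)

# Site B forwards only tuples whose key might match (false positives possible).
site_b_tuples = [(17, "x"), (5, "y"), (99, "z")]
print([t for t in site_b_tuples if t[0] in bf])
```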

Patent
11 Sep 2007
TL;DR: In this paper, the authors describe a distributed database appliance in which two or more internetworked data storage units are used to coordinate the storage and retrieval of database records, where a software application for executing database operations executes in a distributed fashion with portions of the database application executing on at least one central database processor and other portions executing on data storage processors.
Abstract: A database appliance in which two or more internetworked data storage units are used to coordinate the storage and retrieval of database records. One or more central database processing units are also associated with the data storage units. A network infrastructure provides the ability for the central database processors and storage processors to communicate as network nodes, with the network infrastructure using a communication protocol. A software application for executing database operations executes in a distributed fashion with portions of the database application executing on at least one central database processor and other portions executing on the data storage processors. At least a portion of the database application is implemented within and/or coordinated by a communication process that is executing the communication protocol. This coordination takes place such that data blocks are passed between the communication process and at least one portion of the database application process by passing data block reference information. In accordance with other aspects of the present invention, the communication process may have at least portions of the database application process executing within it. These database application operations executing within the same context as the communication process may include database operations such as join, sort, aggregate, restrict, reject, expression evaluation, statistical analysis or other operations.

Journal ArticleDOI
TL;DR: This work designs robust PIR protocols, i.e., protocols which still work correctly even if only some servers are available during the protocol's operation, and presents various robust PIR protocols giving different tradeoffs between the different parameters.
Abstract: An information-theoretic private information retrieval (PIR) protocol allows a user to retrieve a data item of its choice from a database replicated amongst several servers, such that each server gains absolutely no information on the identity of the item being retrieved. One problem with this approach is that current systems do not guarantee availability of servers at all times for many reasons, e.g., crash of server or communication problems. In this work we design robust PIR protocols, i.e., protocols which still work correctly even if only some servers are available during the protocol's operation. We present various robust PIR protocols giving different tradeoffs between the different parameters. We first present a generic transformation from regular PIR protocols to robust PIR protocols. We then present two constructions of specific robust PIR protocols. Finally, we construct robust PIR protocols which can tolerate Byzantine servers, i.e., robust PIR protocols which still work in the presence of malicious servers or servers with a corrupted or obsolete database.
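For context, the classic two-server information-theoretic PIR scheme that robust protocols generalize can be sketched in a few lines: each server sees only a uniformly random subset of positions, and XORing the two answers recovers the requested bit. This is the textbook baseline, not the paper's robust or Byzantine-tolerant constructions.

```python
# Minimal sketch of two-server XOR-based PIR over a replicated bit database.
import random

db = [1, 0, 1, 1, 0, 0, 1, 0]          # same database replicated on both servers
n = len(db)

def server_answer(query_set):
    """Each server XORs the bits at the queried positions."""
    ans = 0
    for j in query_set:
        ans ^= db[j]
    return ans

def retrieve(i):
    s1 = {j for j in range(n) if random.random() < 0.5}   # uniformly random subset
    s2 = s1 ^ {i}                                         # differs only in position i
    return server_answer(s1) ^ server_answer(s2)          # equals db[i]

print(retrieve(3), db[3])
```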

Journal ArticleDOI
TL;DR: It is concluded that it will be impractical to rely only on one common ontology for resource discovery and that the approach of using human-created ontologies in combination with automatic concept space generation and associative retrieval is a powerful means to the discovery of geospatial resources.
Abstract: The geospatial community is moving toward distributed databases and Web services by following the general developments in information and communication technology. The sharing of resources across multiple information communities raises the need of new technologies that support resource discovery and information retrieval. This paper investigates if a common ontology is desirable and feasible for information retrieval in a European spatial data infrastructure. It does so by reviewing relevant literature and proposes an approach for the automatic updating of existing ontologies, designed to facilitate access to multilingual descriptors of geospatial resources. We demonstrate by means of a prototype of an experimental system that the proposed approach is feasible. The experimental system is unique because it integrates a gazetteer, the EuroVoc multilingual vocabulary, the GEMET multilingual thesaurus, an automatic concept space generator, and graph matching into one system. Based on our study, we conclude that it will be impractical to rely only on one common ontology for resource discovery. We also conclude that the approach of using human-created ontologies in combination with automatic concept space generation and associative retrieval is a powerful means to the discovery of geospatial resources. In the absence of a consistent use of semantic Web technologies, a centralized approach to indexing of metadata is required, which has consequences for architectural choices

Patent
Philip Thomas Hartman
05 Oct 2007
TL;DR: In this article, the authors present a method and service for establishing a web-based network that includes an enterprise locking service, which is able to coordinate multiple, cooperating applications that need to ensure that one and only one user is modifying a database record at a given time.
Abstract: A method and service for establishing a web-based network that includes an enterprise locking service. The enterprise locking service is able to coordinate multiple, cooperating applications that need to ensure that one and only one user is modifying a database record at a given time. These database records may be stored in multiple databases having potentially different database record locking protocols. Through monitoring and tracking of requests for database locks, the enterprise locking service is also able to determine database usage trends under various metrics.

Patent
17 Aug 2007
TL;DR: In a method for distributed database replication, local change records describing the local changes to the database at a node are transmitted to other nodes; each node receives from the other nodes change records describing changes to the database at those nodes, and a log of change records is accumulated, where each change record describes a change made to a row at a source node.
Abstract: In a method for distributed database replication, local change records describing the local changes to the database at a node are transmitted to other nodes. Each node receives from the other nodes other change records describing changes to the database at the other nodes, and a log of change records is accumulated. Each change record describes a change made to a row at a source node, e.g., using data such as an identifier of the source node, a source node abstract clock value, a row identifier, and cell values of the row before and after the change. Autonomously from the other nodes, each node applies the other change records to its local copy of the database, considering the other change records in source node abstract clock order. The other change records are applied by checking for a collision between the other change records and the database and, when a collision is detected, selecting a persistent result by sequentially scanning through the log of change records in order of local abstract clock value to identify the persistent result.
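A simplified sketch of applying change records in source-node abstract clock order with a before-image collision check. The field names are hypothetical, and the collision resolution shown (latest clock wins) is a stand-in for the patent's log-scanning procedure for selecting a persistent result.

```python
# Minimal sketch (hypothetical fields): apply change records in abstract clock
# order and flag a collision when the stored row no longer matches the
# record's before-image.
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    source_node: str
    clock: int              # source node's abstract clock value
    row_id: str
    before: dict            # cell values before the change
    after: dict             # cell values after the change

def apply_records(local_rows, records):
    for rec in sorted(records, key=lambda r: (r.clock, r.source_node)):
        current = local_rows.get(rec.row_id)
        if current != rec.before:                 # collision with local state
            # simplified deterministic choice; the patent scans the log instead
            print(f"collision on {rec.row_id}, keeping change from {rec.source_node}")
        local_rows[rec.row_id] = rec.after
    return local_rows

rows = {"r1": {"qty": 5}}
log = [
    ChangeRecord("nodeB", 7, "r1", {"qty": 5}, {"qty": 6}),
    ChangeRecord("nodeC", 9, "r1", {"qty": 5}, {"qty": 2}),   # conflicts with nodeB's update
]
print(apply_records(rows, log))
```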

Proceedings ArticleDOI
15 Apr 2007
TL;DR: ICEDB incorporates two key ideas: a delay-tolerant continuous query processor, coordinated by a central server and distributed across the mobile nodes, and algorithms for prioritizing certain query results to improve application-defined "utility" metrics.
Abstract: Current distributed database and stream processing systems assume that the network connecting nodes in the data processor is "always on," and that the absence of a network connection is a fault that needs to be masked to avoid failure. Several emerging wireless sensor network applications must cope with a combination of node mobility (e.g., sensors on moving cars) and high data rates (media-rich sensors capturing videos, images, sounds, etc.). Due to their mobility, these sensor networks display intermittent and variable network connectivity, and often have to deliver large quantities of data relative to the bandwidth available during periods of connectivity. This paper describes ICEDB (Intermittently Connected Embedded Database), a continuous query processing system for intermittently connected mobile sensor networks. ICEDB incorporates two key ideas: (1) a delay-tolerant continuous query processor, coordinated by a central server and distributed across the mobile nodes, and (2) algorithms for prioritizing certain query results to improve application-defined "utility" metrics. We describe the results of several experiments that use data collected from a small deployed network of six cars driving in and around Boston and Seattle.
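The prioritization idea can be sketched as a utility-ordered buffer that is drained whenever connectivity and bandwidth become available. The utility scores, the bandwidth accounting, and the class interface below are illustrative assumptions, not ICEDB's actual algorithms.

```python
# Minimal sketch: buffered query results are sent in order of an
# application-defined utility score when connectivity returns.
import heapq

class ResultBuffer:
    def __init__(self):
        self._heap = []
    def add(self, result, utility):
        heapq.heappush(self._heap, (-utility, result))   # max-utility first
    def drain(self, bandwidth_budget):
        sent = []
        while self._heap and bandwidth_budget > 0:
            _, result = heapq.heappop(self._heap)
            sent.append(result)
            bandwidth_budget -= 1
        return sent

buf = ResultBuffer()
buf.add("pothole image @ lat,lon", utility=0.9)
buf.add("routine GPS sample", utility=0.2)
buf.add("traffic jam video clip", utility=0.7)
print(buf.drain(bandwidth_budget=2))   # high-utility results go out first
```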

Journal ArticleDOI
01 Nov 2007
TL;DR: This paper comes up with a privacy-preserving distributed association rule mining protocol based on a new semi-trusted mixer model that can protect the privacy of each distributed database against a coalition of up to n-2 other data sites, or even against the mixer if the mixer does not collude with any data site.
Abstract: Distributed data mining applications, such as those dealing with health care, finance, counter-terrorism and homeland defence, use sensitive data from distributed databases held by different parties. This comes into direct conflict with an individual's need and right to privacy. In this paper, we come up with a privacy-preserving distributed association rule mining protocol based on a new semi-trusted mixer model. Our protocol can protect the privacy of each distributed database against a coalition of up to n-2 other data sites, or even against the mixer if the mixer does not collude with any data site. Furthermore, our protocol needs only two communications between each data site and the mixer in one round of data collection.

Patent
03 Aug 2007
TL;DR: In this article, the authors presented an unpowered low-cost "smart" micromunition unit for a weapon system for defense against an asymmetric attack upon ships and sea or land based facilities.
Abstract: The present invention provides an unpowered low-cost “smart” micromunition unit for a weapon system for defense against an asymmetric attack upon ships and sea or land based facilities. A plurality of air dropped micromunition units are each capable of detecting and tracking a plurality of maneuvering targets and of establishing a fast acting local area wireless communication network among themselves to create a distributed database stored in each deployed micromunition unit for sharing target and micromunition unit data. Each micromunition unit autonomously applies stored algorithms to data from the distributed database to select a single target for intercept and to follow an intercept trajectory to the selected target.

Book ChapterDOI
TL;DR: This talk presents semantic gossiping, a model for the dynamic reorganization of semantic overlay networks resulting from information propagation through the network and local realignment of semantic relationships, and a quick glance at how these techniques can be implemented at the systems level, based on a peer-to-peer systems approach.
Abstract: Until recently, most data interoperability techniques involved central components, e.g., global schemas or ontologies, to overcome semantic heterogeneity for enabling transparent access to heterogeneous data sources. Today, however, with the democratization of tools facilitating knowledge elicitation in machine-processable formats, one cannot rely on global, centralized schemas anymore as knowledge creation and consumption are getting more and more dynamic and decentralized. Peer Data Management Systems (PDMS) implementing semantic overlay networks are a good example of this new breed of systems eliminating the central semantic component and replacing it through decentralized processes of local schema alignment and query processing. As a result semantic interoperability becomes an emergent property of the system. In this talk we provide examples of both structural and dynamic aspects of such emergent semantics systems based on semantic overlay networks. From the structural perspective we can show that the typical properties of self-organizing networks also appear in semantic overlay networks. They form directed, scale-free graphs. We present both analytical models for characterizing those graphs and empirical results providing insight on their quantitative properties. Then we present semantic gossiping, a model for the dynamic reorganization of semantic overlay networks resulting from information propagation through the network and local realignment of semantic relationships. The techniques we apply in that context are based on belief propagation, a distributed probabilistic reasoning technique frequently encountered in self-organizing systems. Finally, we give a quick glance at how these techniques can be implemented at the systems level, based on a peer-to-peer systems approach.

Journal ArticleDOI
TL;DR: The MammoGrid database appears to the user to be a single database, but the mammograms that comprise it are in fact retained and curated in the centres that generated them.


Patent
17 Oct 2007
TL;DR: In this article, the synchronization of data updates within a cluster of application servers is provided by having application servers themselves synchronize all updates to multiple redundant databases, precluding the need for database-level replication.
Abstract: Application-level replication, the synchronization of data updates within a cluster of application servers, may be provided by having application servers themselves synchronize all updates to multiple redundant databases, precluding the need for database-level replication. This may be accomplished by first sending a set of database modifications requested by the transaction to a first database. Then a message may be placed in one or more message queues, the message indicating the objects inserted, updated, or deleted in the transaction. Then a commit command may be sent to the first database. The set of database modifications and a commit command may then be sent to a second database. This allows for transparent synchronization of the databases and quick recovery from a database failure, while imposing little performance or network overhead.
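A minimal sketch of the commit sequence described above, with plain lists standing in for databases and a simple queue for the message bus; the interfaces are hypothetical and failure handling is omitted.

```python
# Minimal sketch of application-level replication: apply modifications to the
# first database, enqueue a message describing the changed objects, commit,
# then replay the same modifications and commit on the second database.
import queue

def replicate(modifications, db1, db2, message_queue):
    for stmt in modifications:
        db1.append(stmt)                          # 1. send modifications to first DB
    message_queue.put({"changed": modifications}) # 2. record what the transaction touched
    db1.append("COMMIT")                          # 3. commit on the first database
    for stmt in modifications:                    # 4. replay on the redundant database
        db2.append(stmt)
    db2.append("COMMIT")

primary, replica, mq = [], [], queue.Queue()
replicate(["UPDATE orders SET status='shipped' WHERE id=7"], primary, replica, mq)
print(primary == replica, mq.qsize())             # True 1
```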

Journal ArticleDOI
01 Nov 2007
TL;DR: A new algorithm, BRANCA, is proposed for performing top-k retrieval in distributed environments; by integrating two orthogonal methodologies, "semantic caching" and "routing indexes", it is able to solve a query by accessing only a small number of servers.
Abstract: The rapid development of networking technologies has made it possible to construct a distributed database that involves a huge number of sites. Query processing in such a large-scale system poses serious challenges beyond the scope of traditional distributed algorithms. In this paper, we propose a new algorithm, BRANCA, for performing top-k retrieval in these environments. Integrating two orthogonal methodologies, "semantic caching" and "routing indexes", BRANCA is able to solve a query by accessing only a small number of servers. Our algorithmic findings are accompanied by a solid theoretical analysis, which rigorously proves the effectiveness of BRANCA. Extensive experiments verify that our technique outperforms the existing methods significantly.

Proceedings ArticleDOI
01 May 2007
TL;DR: This paper focuses on query plan generation, execution and update algorithms for continuous range queries in PLACE* using QTP, a new Query-Track-Participate query processing model inside PLACE*.
Abstract: In this paper, we introduce PLACE*, a distributed spatio-temporal data stream management system for moving objects. PLACE* supports continuous spatio-temporal queries that hop among a network of regional servers. To minimize the execution cost, a new Query-Track-Participate (QTP) query processing model is proposed inside PLACE*. In the QTP model, a query is continuously answered by a querying server, a tracking server, and a set of participating servers. In this paper, we focus on query plan generation, execution and update algorithms for continuous range queries in PLACE* using QTP. An extensive experimental study demonstrates the effectiveness of the proposed algorithms in PLACE*.

Patent
03 Jul 2007
TL;DR: In this article, a system for providing database functionality on a peer-to-peer network is described that provides a highly scalable, fault tolerant, highly available, secure distributed transactions and reporting environment for application development and deployment.
Abstract: A system for providing database functionality on a peer-to-peer network is described that provides a highly scalable, fault tolerant, highly available, secure distributed transactions and reporting environment for application development and deployment.

Patent
19 Nov 2007
TL;DR: In this article, a system event monitor monitors the database systems' system conditions and operating environment events within the domain, and a multi-system regulator manages the domain and creates a dynamic event on one of the database system based on the system conditions.
Abstract: A computer-implemented apparatus, method, and article of manufacture provide the ability to manage a plurality of database systems. A domain contains a plurality of database systems. A system event monitor, on each of the database systems, monitors the database systems' system conditions and operating environment events within the domain. A multi-system regulator manages the domain, communicates with the system event monitor, and creates a dynamic event on one of the database systems based on the system conditions and operating environment events. The dynamic event causes an adjustment to a state of the database system.