
Showing papers in "International Journal of Cooperative Information Systems in 1998"


Journal ArticleDOI
TL;DR: This paper formalizes a graphical conceptual model for data warehouses, called Dimensional Fact model, and proposes a semi-automated methodology to build it from the pre-existing schemes describing the enterprise relational database.
Abstract: Data warehousing systems enable enterprise managers to acquire and integrate information from heterogeneous sources and to query very large databases efficiently. Building a data warehouse requires adopting design and implementation techniques completely different from those underlying operational information systems. Though most scientific literature on the design of data warehouses concerns their logical and physical models, an accurate conceptual design is the necessary foundation for building a DW which is well-documented and fully satisfies requirements. In this paper we formalize a graphical conceptual model for data warehouses, called Dimensional Fact model, and propose a semi-automated methodology to build it from the pre-existing (conceptual or logical) schemes describing the enterprise relational database. The representation of reality built using our conceptual model consists of a set of fact schemes whose basic elements are facts, measures, attributes, dimensions and hierarchies; other features…

536 citations
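
A minimal, hypothetical sketch of what a fact scheme in the spirit of the model described above might look like as a data structure: a fact with measures, and dimensions whose attributes form hierarchies. The SALE example and all names are illustrative assumptions, not taken from the paper.

```python
# Sketch of a fact scheme: fact + measures + dimension hierarchies.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str                                        # e.g. "date"
    hierarchy: list = field(default_factory=list)    # attributes from finest to coarsest

@dataclass
class FactScheme:
    fact: str          # name of the fact, e.g. "SALE"
    measures: list     # numeric properties of the fact
    dimensions: list   # Dimension objects

sale = FactScheme(
    fact="SALE",
    measures=["quantity", "revenue"],
    dimensions=[
        Dimension("date", ["day", "month", "quarter", "year"]),
        Dimension("product", ["product", "category", "department"]),
        Dimension("store", ["store", "city", "country"]),
    ],
)

# A roll-up corresponds to aggregating along a hierarchy, e.g. from day to month.
print([d.hierarchy for d in sale.dimensions])
```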


Journal ArticleDOI
TL;DR: A formal semantics for this operation of merging first-order theories is proposed and it is shown that it has desirable properties, including abiding by majority rule in case of conflict and syntax independence.
Abstract: The problem of integrating information from conflicting sources comes up in many current applications, such as cooperative information systems, heterogeneous databases, and multiagent systems. We model this by the operation of merging first-order theories. We propose a formal semantics for this operation and show that it has desirable properties, including abiding by majority rule in case of conflict and syntax independence. We apply our semantics to the special case when the theories to be merged represent relational databases under integrity constraints. We then present a way of merging databases that have different or conflicting schemas caused by problems such as synonyms, homonyms or type conflicts mentioned in the schema integration literature.

212 citations
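
A toy illustration of the majority-rule flavour mentioned above, reduced to merging a single-valued attribute across conflicting relational sources: keep the value asserted by the most sources. This is an assumed simplification for intuition only; the paper defines a formal semantics for merging full first-order theories.

```python
# Majority-rule merge of conflicting single-valued facts from several sources.
from collections import Counter

def merge_by_majority(sources):
    """sources: list of dicts mapping a key to an attribute value."""
    merged = {}
    for key in set().union(*sources):
        votes = Counter(s[key] for s in sources if key in s)
        merged[key] = votes.most_common(1)[0][0]   # majority value (ties broken arbitrarily)
    return merged

db1 = {"alice": "London", "bob": "Paris"}
db2 = {"alice": "London", "bob": "Rome"}
db3 = {"alice": "Madrid", "bob": "Rome"}
print(merge_by_majority([db1, db2, db3]))   # alice -> 'London', bob -> 'Rome'
```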


Journal ArticleDOI
TL;DR: The introduction of an itinerary concept makes it possible to specify an agent's travel plan flexibly and gives the agent system the option of postponing visits to currently unavailable nodes or choosing alternative nodes in case of node failures.
Abstract: The use of mobile agent technology has been proposed for various fault-sensitive application areas, including electronic commerce and system management. A prerequisite for the use of mobile agents in these environments is that agents be executed reliably, independent of communication and node failures. In this article, we present two approaches that improve the level of fault tolerance in agent execution. The introduction of an itinerary concept makes it possible to specify an agent's travel plan flexibly and gives the agent system the option of postponing visits to currently unavailable nodes or choosing alternative nodes in case of node failures. The second approach is a recently proposed fault-tolerant protocol that ensures the exactly-once execution of an agent. With this protocol, agents are executed in stages. Each stage consists of a number of nodes. One of these nodes executes the agent while the others monitor the execution. After a summary of this protocol, we focus on the construction of stages. In particular, we investigate how the number of nodes per stage influences the probability that an agent is blocked due to failures, and which nodes should be selected when forming a stage to minimize the protocol overhead.

66 citations
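
A small, hypothetical sketch of the itinerary idea from the abstract above: each step lists alternative nodes, and steps whose nodes are all unavailable are postponed rather than aborting the agent. The host names, the availability check, and the retry policy are illustrative assumptions.

```python
# Flexible itinerary: alternatives per step, unavailable steps are postponed.
def run_itinerary(itinerary, is_available, execute):
    postponed = []
    for alternatives in itinerary:                       # each step: candidate nodes
        node = next((n for n in alternatives if is_available(n)), None)
        if node is None:
            postponed.append(alternatives)               # visit later instead of failing
        else:
            execute(node)
    return postponed                                     # steps still to be retried

itinerary = [["hostA"], ["hostB", "hostB-mirror"], ["hostC"]]
available = {"hostA": True, "hostB": False, "hostB-mirror": True, "hostC": False}
left_over = run_itinerary(itinerary, available.get, lambda n: print("executed on", n))
print("postponed:", left_over)
```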


Journal ArticleDOI
TL;DR: This article gives an overview of the distributed systems technologies available for open and heterogeneous electronic commerce applications, and presents some related projects conducted by the authors jointly with international partners to realize important new functions of a system infrastructure for open, distributed electronic commerce applications.
Abstract: Based on the specific characteristics and requirements for adequate electronic commerce system support, this article gives an overview of the distributed systems technologies which are available for open and heterogeneous electronic commerce applications. Beyond basic communication mechanisms such as (transactionally secure) remote procedure calls and remote database access, this includes service trading and brokerage functions as well as security aspects such as notary and non-repudiation functions. Further important elements of a system infrastructure for electronic commerce applications are common middleware infrastructures, componentware techniques, distributed and mobile agent technologies, etc. As electronic transactions enter the performance phase, new and important functions are increasingly required. Among these are negotiation protocols to support both the settlement and fulfillment of electronic contracts, as well as ad-hoc workflow management support for compound and distributed services in electronic commerce applications. In addition to an overview of the state of the art of the respective technology, the article briefly presents some related projects conducted by the authors jointly with international partners in order to realize some of the important new functions of a system infrastructure for open distributed electronic commerce applications.

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors present market-based workflow management, an approach to workflow specification and execution which regards tasks performed in a workflow as goods traded in an electronic market; information about expected cost and execution time is used at runtime to execute workflows such that actual cost and execution times are balanced and optimized.
Abstract: This paper presents market-based workflow management, a novel approach to workflow specification and execution which regards tasks performed in a workflow as goods traded in an electronic market. Information about expected cost and execution time is considered in activity specifications, and is used at runtime to execute workflows such that actual cost and execution times are balanced and optimized. To that end, task assignment follows a bidding protocol, in which each eligible processing entity specifies at which price and in which time interval it can execute the activity. The winner of a specific bidding process is requested to execute the activity, and earns the amount specified in the corresponding bid. Market-based workflow management thus makes it possible to optimize workflow executions with respect to execution time and overall cost; additionally, the trading of activities provides an incentive for processing entities to engage in a workflow.

39 citations
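
A toy version of the bidding step described above, assuming each eligible processing entity submits a (price, completion time) bid and the scheduler picks the bid minimizing a weighted combination of the two. The linear weighting and the bid values are assumptions; the paper only states that cost and execution time are balanced at runtime.

```python
# Select the winning bid for one workflow activity.
def select_winner(bids, cost_weight=1.0, time_weight=1.0):
    """bids: dict entity -> (price, completion_time). Returns (winner, price)."""
    def score(item):
        _, (price, time_needed) = item
        return cost_weight * price + time_weight * time_needed
    winner, (price, _) = min(bids.items(), key=score)
    return winner, price      # the winner executes the activity and earns its bid

bids = {"serviceA": (40.0, 2.0), "serviceB": (25.0, 6.0), "serviceC": (30.0, 3.0)}
print(select_winner(bids))    # ('serviceB', 25.0) with equal weights
```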


Journal ArticleDOI
TL;DR: The paper describes a CORBA-based research prototype for an electronic broker in business-to-business electronic commerce, with special emphasis on electronic commerce protocol standards.
Abstract: Distributed object standards provide a key to building interoperable applications that can run on a range of platforms. The paper describes a CORBA-based research prototype for an electronic broker in business-to-business electronic commerce. High-level IDL specifications are used to achieve interoperability between components of the electronic marketplace. The two key functionalities of the electronic broker are the ability to dynamically gather information from remote electronic catalogs and the support for negotiations through auction mechanisms. The paper discusses the functionality and the design of the electronic broker and gives an overview of current extensions of the prototype. As application-level interoperability is a crucial precondition for many brokerage services, we put special emphasis on electronic commerce protocol standards.

36 citations
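
A hypothetical Python sketch of the broker's two core functionalities named in the abstract above, gathering offers from remote catalogs and running an auction; the actual prototype defines these as CORBA IDL interfaces, and the catalog layout and highest-bid-wins rule here are illustrative assumptions.

```python
# Broker sketch: catalog aggregation plus a simple auction.
def gather_offers(catalogs, product):
    """Query each remote catalog for a product and collect the matching offers."""
    offers = []
    for catalog in catalogs:
        offers.extend(catalog.get(product, []))
    return offers

def run_auction(bids):
    """Simple rule: the highest bid wins (one of several possible auction mechanisms)."""
    return max(bids, key=lambda b: b["amount"]) if bids else None

catalog1 = {"laser-printer": [{"supplier": "S1", "price": 300}]}
catalog2 = {"laser-printer": [{"supplier": "S2", "price": 280}]}
print(gather_offers([catalog1, catalog2], "laser-printer"))
print(run_auction([{"bidder": "B1", "amount": 250}, {"bidder": "B2", "amount": 270}]))
```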


Journal ArticleDOI
TL;DR: The application of a new variant of high-level Petri nets, the so-called SGML nets, to modeling business processes in the area of Internet-based commerce is discussed.
Abstract: This article discusses the application of a new variant of high-level Petri nets, the so-called SGML nets, for modeling business processes in the area of Internet-based commerce. SGML nets are designed to capture the process of generating and manipulating structured documents based on the international standard SGML. Since the currently most relevant document standards on the Internet are HTML (an SGML application) and XML (a subset of SGML), SGML nets offer an elegant way to integrate central aspects of Electronic Commerce applications, such as the generation of online product catalogs, processing of online orders, and electronic document interchange between companies, into a unified formal workflow model. The article gives an introduction to the central concepts of SGML nets and includes an example of their application from the area of online order processing.

22 citations
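
A very small, hypothetical illustration of the idea behind SGML nets: a Petri-net transition whose tokens are structured documents, which fires only when an input document satisfies a structural condition and deposits a transformed document in the output place. The order-confirmation example and the dictionary-based document representation are made up for illustration and are much simpler than SGML documents.

```python
# One transition of a document-carrying Petri net.
def fire(transition, places):
    src, dst, guard, action = transition
    ready = [doc for doc in places[src] if guard(doc)]   # tokens enabling the transition
    if not ready:
        return False
    doc = ready[0]
    places[src].remove(doc)
    places[dst].append(action(doc))                      # produced token = transformed document
    return True

places = {"incoming_orders": [{"tag": "order", "items": ["book"], "status": "new"}],
          "confirmed_orders": []}
confirm = ("incoming_orders", "confirmed_orders",
           lambda d: d["tag"] == "order" and d["status"] == "new",
           lambda d: {**d, "status": "confirmed"})
fire(confirm, places)
print(places["confirmed_orders"])
```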


Journal ArticleDOI
TL;DR: A spatial/temporal query language, the ΣQL, which is based upon the σ-operator sequence and is in practice expressible in an SQL-like syntax, is described.
Abstract: To support the retrieval, fusion and discovery of multimedia information, a spatial/temporal query language for multiple data sources is needed. In this paper we describe a spatial/temporal query language, the ΣQL, which is based upon the σ-operator sequence and is in practice expressible in an SQL-like syntax. The general σ-operator and the temporal σ-operator are explained, and applications of the σ-query language to vertical/horizontal reasoning and to hypermapped virtual worlds are discussed.

18 citations
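
A rough, hypothetical reading of a σ-operator sequence as applied above: each operator selects along one dimension (here time, then space) of a set of multimedia objects. The record layout and predicates are illustrative assumptions, not the paper's formal definition of the σ-operator.

```python
# Apply a sequence of single-dimension selection operators left to right.
def sigma(dimension, predicate):
    """Operator selecting objects by a predicate on one dimension."""
    return lambda objects: [o for o in objects if predicate(o[dimension])]

def apply_sequence(operators, objects):
    for op in operators:
        objects = op(objects)
    return objects

frames = [{"t": 1, "region": "north", "label": "car"},
          {"t": 5, "region": "south", "label": "car"},
          {"t": 7, "region": "north", "label": "truck"}]
query = [sigma("t", lambda t: t >= 5),             # temporal selection
         sigma("region", lambda r: r == "north")]  # spatial selection
print(apply_sequence(query, frames))               # only the t=7 frame remains
```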


Journal ArticleDOI
TL;DR: The systematic design and development of a distributed query scheduling service (DQS) is presented in the context of DIOM, a distributed and interoperable query mediation system, with an extensible architecture for distributed query processing, a three-phase optimization algorithm for generating efficient query execution schedules, and a prototype implementation.
Abstract: We present the systematic design and development of a distributed query scheduling service (DQS) in the context of DIOM, a distributed and interoperable query mediation system. DQS consists of an extensible architecture for distributed query processing, a three-phase optimization algorithm for generating efficient query execution schedules, and a prototype implementation. Functionally, two important execution models of distributed queries, namely moving the query to the data or moving the data to the query, are supported and combined into a unified framework, allowing data sources with limited search and filtering capabilities to be incorporated through wrappers into the distributed query scheduling process. Algorithmically, conventional optimization factors (such as join order) are considered separately from and refined by distributed system factors (such as data distribution, execution location, and heterogeneous host capabilities), allowing for stepwise refinement through three optimization phases: compilation, parallelization, and site selection and execution. A subset of the DQS algorithms has been implemented in Java to demonstrate the practicality of the architecture and the usefulness of the distributed query scheduling algorithm in optimizing execution schedules for inter-site queries.

16 citations
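
A toy cost comparison between the two execution models named in the abstract above: ship the query to the data source, or ship the data to the query site. The cost model (bytes moved times a transfer cost, plus a fixed setup cost for remote execution) and all numbers are assumed simplifications, not the DQS cost model.

```python
# Pick an execution model for one sub-query based on estimated costs.
def choose_execution_model(result_size, input_size, transfer_cost, remote_setup_cost):
    move_query_cost = remote_setup_cost + result_size * transfer_cost   # only results travel
    move_data_cost = input_size * transfer_cost                         # raw input travels
    return "move query to data" if move_query_cost <= move_data_cost else "move data to query"

# A selective filter over a large remote table favours shipping the query.
print(choose_execution_model(result_size=1_000, input_size=500_000,
                             transfer_cost=0.01, remote_setup_cost=50))
```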


Journal ArticleDOI
TL;DR: This paper presents the design philosophy, the architecture and the core of the WAG system, a system allowing the user to query (instead of browsing) the Web by constructing a personalized database, pertinent to the user's interests.
Abstract: The Internet revolution has made an enormous quantity of information available to a disparate variety of people. The amount of information, the typical access modality (that is, browsing), and the rapid growth of the Net force the user searching for information of interest to dip into multiple sources, in a labyrinth of millions of links. Web-at-A-Glance (WAG) is a system that allows the user to query (instead of browse) the Web. WAG performs this ambitious task by constructing a personalized database pertinent to the user's interests. The system semi-automatically gleans the most relevant information from one or several Web sites, stores it in a database cooperatively designed with the user, and allows her/him to query this database through a visual interface equipped with a powerful multimedia query language. This paper presents the design philosophy, the architecture and the core of the WAG system. A prototype WAG is being implemented to test the feasibility of the proposed approach.

7 citations


Journal ArticleDOI
TL;DR: The issue of pre-analysing and enforcing inter-task dependencies is studied; the protocol and the theory behind it are presented, along with examples and a discussion of ways to improve performance.
Abstract: Workflow techniques have gained a lot of attention as a means to support advanced business applications such as cooperative information systems and process re-engineering, but also as a means to integrate legacy systems. Inter-task dependencies, described separately from the other parts of the workflow, have been recognized as a valuable means of describing certain restrictions on the execution of workflows. In this paper, we study the issue of pre-analysing and enforcing inter-task dependencies. The protocol and the theory behind it are presented, along with examples and a discussion of ways to improve performance. The idea is to represent the meaning of a dependency through an automaton which accepts the sequences of events tied by the dependency. We prune from the automata certain paths that cannot be followed due to conflicting paths in other automata, and record the feasible event sequences in a special data structure to be used at run time. We show the correctness of the algorithms and also show that our run-time algorithm is linear, whereas the original approach suggested by MCC in Refs. 5 and 6 is exponential, as far as the resolution of a single event is concerned.
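
A small, simplified illustration of representing one inter-task dependency as an automaton over workflow events and checking events at run time. The dependency "task B may commit only after task A has committed" and the state names are illustrative assumptions; the paper additionally prunes paths that conflict with other automata and precomputes the feasible event sequences.

```python
# One dependency automaton: transitions over workflow events.
class DependencyAutomaton:
    def __init__(self, transitions, start):
        self.transitions = transitions   # (state, event) -> next state
        self.state = start

    def accept(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            return False                 # accepting the event would violate the dependency
        self.state = nxt
        return True

dep = DependencyAutomaton({("s0", "commit_A"): "s1",
                           ("s1", "commit_B"): "s2"}, start="s0")
print(dep.accept("commit_B"))   # False: B may not commit before A
print(dep.accept("commit_A"))   # True
print(dep.accept("commit_B"))   # True
```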

Journal ArticleDOI
Changkyu Choi, Ju-Jang Lee
TL;DR: A local-minima-free search algorithm using chaos is proposed for an unstructured search space, and the validity of the proposed method will be verified in simulation examples of the function minimization problem and the motion planning problem of a mobile robot.
Abstract: In this paper, a local-minima-free search algorithm using chaos is proposed for an unstructured search space. The problem is: given a quality function, find the configuration that minimizes it. The proposed algorithm starts from the basic gradient search technique, but at prescribed points, namely automatically detected local minimum points, a chaotic jump is introduced by the dynamics of a chaotic neuron. The chaotic motions arise mainly from a Gaussian function with hysteresis acting as a refractoriness. In order to enhance the probability of finding the global minimum, a parallel search strategy is also given. The validity of the proposed method will be verified in simulation examples of the function minimization problem and the motion planning problem of a mobile robot.
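
A hypothetical sketch of the overall idea in the abstract above: run gradient descent and, when a local minimum is detected (near-zero gradient), perturb the point with a chaotic jump. A logistic map stands in for the paper's chaotic-neuron dynamics, and the test function, step sizes and thresholds are all assumptions.

```python
# Gradient search with chaotic jumps out of local minima.
import math

def grad(f, x, h=1e-5):
    """Central-difference numerical gradient."""
    return (f(x + h) - f(x - h)) / (2 * h)

def chaotic_search(f, x, steps=5000, lr=0.01, jump_scale=2.0):
    z, best = 0.7, x                          # logistic-map state and best point so far
    for _ in range(steps):
        g = grad(f, x)
        if abs(g) < 1e-4:                     # local minimum detected
            z = 3.9 * z * (1 - z)             # chaotic update (logistic map)
            x += jump_scale * (z - 0.5)       # chaotic jump out of the basin
        else:
            x -= lr * g                       # ordinary gradient step
        if f(x) < f(best):
            best = x
    return best

f = lambda x: 0.1 * x * x + math.sin(3 * x)   # multimodal test function (assumed)
best = chaotic_search(f, x=4.0)
print(best, f(best))   # best point visited; the jumps let the search leave the first local minimum
```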

Journal ArticleDOI
TL;DR: By employing service distribution, BARTER is expected to scale well, meeting the demands of the world-wide setting over which it is intended to operate.
Abstract: BARTER (a Backbone ARchitecture for Trade of ElectRonic content) is a payment infrastructure that facilitates digital content trade over an open network. BARTER is designed to operate over a large-scale, global and heterogeneous communication network. The BARTER protocols address two vital requirements neglected by existing electronic commerce systems: scalability and transactional efficiency. These protocols possess strong properties such as delivery atomicity, agreement validation and the ability to resolve several classes of disputes. BARTER's novelty is twofold. First, BARTER servers are not required to perform expensive cryptographic operations such as commitment verification; commitments are cross-verified by the parties themselves, thus reducing the overhead of online transaction processing by orders of magnitude. Consequently, BARTER can serve as an efficient online/offline clearing infrastructure. Second, BARTER integrates scalability considerations into several system components (the authentication subsystem, the account management subsystem, and the maintenance of global data) that are likely to suffer service degradation in a world-wide setting. In addressing these issues, BARTER takes into account the inherently asynchronous, unreliable, insecure and failure-prone nature of the environment. We contend that by employing service distribution, BARTER is expected to scale well, meeting the demands of the world-wide setting over which it is intended to operate.
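
A highly simplified, hypothetical illustration of the design point stressed above, that the parties, not the servers, verify each other's commitments: each side commits to the agreed terms with a hash, the counterparty checks it, and the server merely records the already-verified transaction. The commitment format, field names and dispute handling in real BARTER are far richer than this sketch.

```python
# Party-side commitment and cross-verification; the server only keeps a record.
import hashlib, json

def commit(terms, secret):
    payload = json.dumps(terms, sort_keys=True) + secret
    return hashlib.sha256(payload.encode()).hexdigest()

def cross_verify(terms, commitment, revealed_secret):
    return commit(terms, revealed_secret) == commitment

terms = {"item": "report.pdf", "price": 5, "buyer": "B", "seller": "S"}
c_seller = commit(terms, secret="seller-nonce")
assert cross_verify(terms, c_seller, "seller-nonce")        # buyer verifies the seller itself
server_ledger = [{"terms": terms, "commitment": c_seller}]  # server just records the outcome
print(server_ledger)
```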

Journal ArticleDOI
TL;DR: In this article the work of the Jupiter Project and its successor LIOM (Legacy system Interoperability using Object-oriented Methods) are discussed and assessed.
Abstract: We believe that the typical hospital computing environment is an especially fruitful domain for investigating the problems of interoperability and cooperation, because hospital computing environments consist of a heterogeneous collection of autonomous information systems. These systems range from centralised hospital-wide systems, such as patient administration systems, to departmental systems such as pharmacy stock control, laboratory information systems, accident and emergency systems and so on. Many of these are legacy systems which have been operating for many years and are difficult to integrate and virtually impossible to rewrite. In this article we discuss and assess the work of the Jupiter Project and its successor LIOM (Legacy system Interoperability using Object-oriented Methods).