
Showing papers in "International Journal of Cooperative Information Systems in 1992"


Journal ArticleDOI
TL;DR: The coordination problem as viewed by the PGP algorithm is described, along with a new model of task structures and coordination relationships that has less communication overhead and can be more easily adapted and extended to new styles of problem solving and to multi-agent environments with characteristics different from those of the original DVMT.
Abstract: The distributed coordination problem can be described as "how should the local scheduling of activities at each agent be affected by non-local concerns and constraints?" Partial global planning (PGP) is a flexible approach to distributed coordination that allows agents to respond dynamically to their current situation. It is based on detecting relationships in the computational goal structures of the distributed agents. However, the detailed PGP mechanisms depend on the existence and availability of certain characteristics and structures that are idiosyncratic to the Distributed Vehicle Monitoring Testbed (DVMT). Generalized Partial Global Planning tries to extend the PGP approach by communicating more abstract and hierarchically organized information, detecting in a general way the coordination relationships that are needed by the partial global planning mechanisms, and separating the process of coordination from local scheduling. This new characterization of partial global planning has less communication overhead and can be more easily adapted and extended to new styles of problem solving and new multi-agent environments that have different characteristics from the original DVMT. This paper first describes the coordination problem as it was viewed by the PGP algorithm, and then extensions to that problem. It then briefly describes our model of task structures and coordination relationships. Finally, we show how the PGP algorithm, as an example, can be described using our method.

203 citations
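
To make the task-structure idea above concrete, here is a minimal Python sketch of tasks linked by coordination relationships that cross agent boundaries, the relationships a PGP-style mechanism would communicate about. The class names, the "enables" relationship, and the helper are illustrative assumptions, not the paper's actual definitions.

# Hypothetical sketch of task structures and coordination relationships;
# names and fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    agent: str                     # agent responsible for the task
    subtasks: list = field(default_factory=list)

@dataclass
class CoordinationRelationship:
    kind: str                      # e.g. "enables" or "facilitates"
    source: Task
    target: Task

def nonlocal_relationships(relationships):
    """Relationships that cross agent boundaries; these are the ones a
    PGP-style coordination mechanism would need to reason about."""
    return [r for r in relationships if r.source.agent != r.target.agent]

# Example: agent A's sensing task enables agent B's interpretation task.
sense = Task("sense-region-1", agent="A")
interpret = Task("interpret-region-1", agent="B")
rels = [CoordinationRelationship("enables", sense, interpret)]
assert nonlocal_relationships(rels) == rels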


Journal ArticleDOI
TL;DR: The concept of distributed object management is described, its role in the development of open, interoperable systems is identified, and specific elements of a distributed object management system being developed at GTE Laboratories are presented.
Abstract: Future information processing environments will consist of a vast network of heterogeneous, autonomous, and distributed computing resources, including computers (from mainframe to personal), information-intensive applications, and data (files and databases). A key challenge in this environment is providing capabilities for combining this varied collection of resources into an integrated distributed system, allowing resources to be flexibly combined, and their activities coordinated, to address challenging new information processing requirements. In this paper, we describe the concept of distributed object management, and identify its role in the development of these open, interoperable systems. We identify the key aspects of system architectures supporting distributed object management, and describe specific elements of a distributed object management system being developed at GTE Laboratories.

192 citations
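
As one reading of the architecture described above, the following Python fragment shows a registry-style object manager that routes operation invocations to whichever resource owns an object, hiding location from callers. The DistributedObjectManager interface and its methods are invented for illustration and are not GTE's actual system.

# Hypothetical sketch of a distributed object manager's routing role.
class DistributedObjectManager:
    def __init__(self):
        self._registry = {}        # object name -> (resource, handler)

    def register(self, name, resource, handler):
        """Advertise an object living on some autonomous resource."""
        self._registry[name] = (resource, handler)

    def invoke(self, name, operation, *args):
        """Route an operation to the resource owning the object, hiding
        its location and implementation from the caller."""
        resource, handler = self._registry[name]
        return handler(operation, *args)

# Example: a database registers an object; callers invoke operations
# uniformly through the manager without knowing where it lives.
dom = DistributedObjectManager()
dom.register("orders", "db-host", lambda op, *a: f"db:{op}{a}")
print(dom.invoke("orders", "query", "status=open"))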


Journal ArticleDOI
TL;DR: An overview of related research areas is given, notably Distributed Artificial Intelligence (DAI) and Distributed Databases (DDBs), and a generic architecture is presented which views an ICIS as a community of communicating and cooperating intelligent information agents.
Abstract: There is a growing belief that the next generation of information processing systems will be based on the paradigm of Intelligent and Cooperative Information Systems (ICISs). Such systems will involve information agents — distributed over the nodes of a common communication network — which work in a synergistic manner by exchanging information and expertise, coordinating their activities and negotiating how to solve parts of a common information-intensive problem. Along with motivating the importance of such systems the paper gives an overview of related research areas, notably Distributed Artificial Intelligence (DAI) and Distributed Databases (DDBs), and presents a generic architecture which views an ICIS as a community of communicating and cooperating intelligent information agents.

110 citations


Journal ArticleDOI
TL;DR: The paper presents inference rules in the natural semantics style for a variety of judgments involving descriptions, such as “subsumption” and “object membership”, and provides the full definition of subsumption in the Classic KBMS as a proof system.
Abstract: We first explore the similarities and differences between concept definitions in description/terminological logics such as KL-ONE, Classic, Back, Loom, etc. and the types normally encountered in programming languages. The similarities lead us to consider the application of natural semantics — the mechanism most frequently used to describe type systems — to the definition of knowledge base management systems that use such description logics. The paper presents inference rules in the natural semantics style for a variety of judgments involving descriptions, such as “subsumption” and “object membership”, and provides the full definition of subsumption in the Classic KBMS as a proof system. One of our objectives is to document some advantages of this approach, including the utility of multiple complementary semantics, and especially the characterization of implementations that are computationally tractable but are incomplete relative to standard denotational semantics.

64 citations
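
The natural-semantics style of judgment mentioned above can be pictured with inference rules of the following shape, where the judgment form states that, given knowledge base K, description D subsumes description C. These two rules for conjunctive descriptions are an illustrative sketch only, not the paper's actual rules for Classic.

% Illustrative subsumption rules in natural-semantics style (not the
% paper's actual rules). Read  K |- C <= D  as "in knowledge base K,
% description D subsumes description C."
\[
\frac{K \vdash C \preceq D_1 \qquad K \vdash C \preceq D_2}
     {K \vdash C \preceq (\mathsf{AND}\; D_1\; D_2)}
\qquad
\frac{\phantom{K \vdash}}{K \vdash (\mathsf{AND}\; C_1\; C_2) \preceq C_1}
\]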


Journal ArticleDOI
TL;DR: A generic model is described, the Recursive Negotiation Model (RNM), that can serve as a basis for classifying and specifying where conflict resolution among multiple experts, viewpoints, or types of reasoning is needed in building a sophisticated CDPS system.
Abstract: Research in Cooperative Distributed Problem Solving (CDPS) considers how problem-solving tasks should be allocated among a group of agents and how the agents should coordinate their actions to achieve effective problem solving. For some CDPS systems, negotiation plays an important role in how agents cooperate. We define negotiation as the process of information exchange by which the agents act to resolve inconsistent views and to reach agreement on how they should work together in order to cooperate effectively. We describe a generic model, the Recursive Negotiation Model (RNM), that can serve as a basis for classifying and specifying where conflict resolution among multiple experts, viewpoints, or types of reasoning is needed in building a sophisticated CDPS system. This model defines where and how negotiation can be applied during problem solving based on structuring problem solving into four stages: problem formulation, focus-of-attention, allocation of goals or tasks to agents, and achievement of goals or tasks. We further discuss how the degree of agent participation in control decisions, including decisions about assigning responsibility to agents, influences the nature of negotiation within a particular system. Through this model, we emphasize that negotiation may be a recursive, complex, and pervasive process that is used to resolve conflicts in both domain-level and control-level problem solving. Finally, we survey existing negotiation frameworks and how they relate to our generic model.

63 citations
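
A toy Python sketch of the model's central idea, that negotiation can recur at any of the four stages, including negotiation about the negotiation itself. The stage list follows the abstract, while the functions and the depth bound are hypothetical, not the paper's specification.

# Hypothetical sketch of recursive negotiation across the four stages.
STAGES = [
    "problem formulation",
    "focus-of-attention",
    "allocation of goals or tasks to agents",
    "achievement of goals or tasks",
]

def negotiate(agents, issue, depth=0):
    """Exchange proposals on an issue; a conflict over the proposals
    themselves can trigger a recursive negotiation one level down,
    e.g. over who has the authority to decide."""
    proposals = {a: f"{a}'s view of {issue}" for a in agents}
    if len(set(proposals.values())) > 1 and depth < 2:
        negotiate(agents, f"how to settle '{issue}'", depth + 1)
    return proposals

for stage in STAGES:
    negotiate(["expert-1", "expert-2"], stage)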


Journal ArticleDOI
TL;DR: Initial definitions for key concepts and terms in the new area of Intelligent and Cooperative Information Systems are provided, potential core contributing technologies are identified, the ICIS concept is illustrated with example systems, and basic research questions are posed.
Abstract: Future information systems will involve large numbers of heterogeneous, intelligent agents distributed over large computer/communication networks. Agents may be humans, humans interacting with computers, humans working with computer support, and computer systems performing tasks without human intervention. We call such systems Intelligent and Cooperative Information Systems (ICISs). Although we can imagine extensions of capabilities of current ISs and of individual contributing core technologies, such as databases, artificial intelligence, operating systems, and programming languages, we cannot imagine the capabilities of ICISs, which we believe will be based on extensions of these and other technologies. Nor do we know exactly what technologies and capabilities will be required, what challenges will arise, or how the technologies might be integrated or work together to address the challenges. In this paper, we provide initial definitions for key concepts and terms in this new area, identify potential core contributing technologies, illustrate the ICIS concept with example systems, and pose basic research questions. We also describe the results of discussions on these topics that took place at the Second International Workshop on Intelligent and Cooperative Information Systems held in Como, Italy, October 1991. The workshop focused on core technologies for ICISs. The workshop and the results reflect the multi-disciplinary nature of this emerging area.

51 citations


Journal ArticleDOI
TL;DR: It is argued that characteristic features of classification, such as the form of inheritance allowed from classes to instances, single versus multiple classification, and the structure of the classification hierarchy, lead to a more expressive notation, acquired at relatively modest conceptual and computational costs.
Abstract: The power of any information system technology is delimited by the expressiveness of the notation used to represent information. Classification constitutes a fundamental notational structuring mechanism and, not surprisingly, is supported in one form or another by many formal notations intended for data or knowledge modelling. This paper presents an overview of various manifestations of classification mechanisms. Further, characteristic features of classification, such as the form of inheritance allowed from classes to instances, having single or multiple classifications and the structure of the classification hierarchy are identified, discussed, and contrasted. The paper also describes the classification mechanism offered by the knowledge representation language Telos. In Telos, classification is stratified and applicable not only to objects but also to (binary) relationships. The paper argues that these features lead to a more expressive notation, acquired at relatively modest conceptual and computational costs.

49 citations
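
The stratified classification described above might be pictured as follows in Python, with tokens at stratum 0, simple classes at stratum 1, and metaclasses at stratum 2, and with classification applying to binary relationships as well as objects. The strata, example objects, and relationship encoding are assumptions made for illustration and are not Telos syntax.

# Hypothetical sketch of stratified classification in the spirit of Telos.
objects = {
    "maria":        {"stratum": 0, "in": ["Employee"]},
    "Employee":     {"stratum": 1, "in": ["EntityClass"]},
    "EntityClass":  {"stratum": 2, "in": []},
}

# Classification applies to (binary) relationships too: a concrete
# 'salary' link is an instance of the relationship class Employee.salary.
relationships = {
    ("maria", "salary", 42000): {"in": [("Employee", "salary", "Integer")]},
}

def check_strata(objs):
    """An object at stratum n may only be an instance of classes at n+1."""
    for name, obj in objs.items():
        for cls in obj["in"]:
            assert objs[cls]["stratum"] == obj["stratum"] + 1, name

check_strata(objects)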


Journal ArticleDOI
TL;DR: The concept of sharing processes allows agents to coordinate the sharing of ideas, tasks, and results by interacting protocol automata which can be dynamically adapted to situational requirements.
Abstract: Information systems support for design environments emphasizes object management and tends to neglect the growing demand for team support. Process management is often tackled by rigid technological protocols which are likely to get in the way of group productivity and quality. Group tools must be introduced in an unobtrusive way which extends current practice yet provides structure and documentation of development experiences. The concept of sharing processes allows agents to coordinate the sharing of ideas, tasks, and results by interacting protocol automata which can be dynamically adapted to situational requirements. Inconsistency is managed with equal emphasis as consistency. The sharing process approach has been implemented in a system called ConceptTalk which has been experimentally integrated with design environments for information and hypertext systems.

28 citations
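
A minimal Python sketch of a sharing process as a dynamically adaptable protocol automaton, the key idea in the abstract. The states, messages, and adapt method are invented for illustration, not ConceptTalk's actual protocols.

# Hypothetical protocol automaton that can be extended at run time.
class ProtocolAutomaton:
    def __init__(self, transitions, state):
        self.transitions = transitions   # (state, message) -> next state
        self.state = state

    def receive(self, message):
        self.state = self.transitions[(self.state, message)]

    def adapt(self, state, message, next_state):
        """Dynamically extend the protocol to fit the situation, rather
        than enforcing a rigid technological procedure."""
        self.transitions[(state, message)] = next_state

# Example: a review protocol between two designers, extended on the fly
# with a 'withdraw' move that the original protocol lacked.
review = ProtocolAutomaton(
    {("drafted", "submit"): "under-review",
     ("under-review", "comment"): "drafted"},
    state="drafted")
review.receive("submit")
review.adapt("under-review", "withdraw", "drafted")
review.receive("withdraw")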


Journal ArticleDOI
TL;DR: This paper develops an abstraction of the set of activities performed by scientists throughout the course of an experimental study, and based on that abstraction it proposes an EMS architecture that can support all such activities.
Abstract: In this paper, we identify some of the fundamental issues that must be addressed in designing a desktop Experiment Management System (EMS). We develop an abstraction of the set of activities performed by scientists throughout the course of an experimental study, and based on that abstraction we propose an EMS architecture that can support all such activities. The proposed EMS architecture is centered around the extensive use of conceptual schemas, which express the structure of information in experimental studies. Schemas are called to play new roles that are not usually found in traditional database systems. We provide a detailed exposition of these new roles and describe certain characteristics that the data model of the EMS must have in order for schemas expressed in it to successfully play these roles. Finally, we present the specifics of our own effort to develop an EMS, focusing on the main features of the data model of the system, which we have developed based on the needs of experiment management.

26 citations
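
One way to picture the "new roles" for schemas mentioned above is a single conceptual schema driving both record validation and a data-entry view, as in this hypothetical Python sketch. The schema format and both functions are assumptions for illustration, not the paper's data model.

# Hypothetical conceptual schema for an experimental study.
experiment_schema = {
    "name": "crystal-growth",
    "inputs":  {"temperature": "float", "pressure": "float"},
    "outputs": {"yield": "float"},
}

def validate(record, schema):
    """Role 1: the schema structures and checks experimental records."""
    for field in list(schema["inputs"]) + list(schema["outputs"]):
        assert field in record, f"missing {field}"

def entry_form(schema):
    """Role 2: the same schema can drive a data-entry interface."""
    return [f"{name} ({typ}): ____"
            for name, typ in {**schema["inputs"],
                              **schema["outputs"]}.items()]

validate({"temperature": 310.0, "pressure": 1.2, "yield": 0.83},
         experiment_schema)
print("\n".join(entry_form(experiment_schema)))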


Journal ArticleDOI
TL;DR: DBB represents a pragmatic blending of diverse technologies from the field of distributed AI, such as contract formation, organizational structuring, election for role assignment, and hierarchical control; it illustrates how integrating existing distributed AI technologies can meet realistic needs and highlights open problems that require the development of new technologies.
Abstract: A distributed computer network management system consisting of cooperating autonomous computing agents allows network management to be more responsive due to information gathering and network recovery activities being performed in parallel. However, to perform these tasks, the network of agents requires a stable organizational infrastructure. In addition, to meet the needs of human network administrators, the distributed system must allow ultimate authority to be centralized at a single location. Distributed Big Brother (DBB) represents a pragmatic blending of diverse technologies from the field of distributed AI, such as contract formation, organizational structuring, election for role assignment, and hierarchical control. The result is an infrastructure for a network management system in which separate agents reconfigure themselves when hardware and software failures occur in order to assure the authority structure demanded by network operators. Our efforts illustrate how integrating existing distributed AI technologies can meet realistic needs, and highlight open problems that require the development of new technologies.

23 citations
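
The election-for-role-assignment ingredient can be sketched as below, with re-election restoring the single point of authority that network operators demand after a failure. The agent names and ranking rule are hypothetical, not DBB's actual mechanism.

# Hypothetical election-based role assignment with reconfiguration.
def elect_manager(agents, alive):
    """Pick the highest-ranked live agent as the single point of
    authority; re-running the election reconfigures the organization
    after a hardware or software failure."""
    candidates = [a for a in agents if alive[a]]
    return max(candidates)              # rank by agent identifier

agents = ["lan-agent-1", "lan-agent-2", "lan-agent-3"]
alive = {a: True for a in agents}
manager = elect_manager(agents, alive)  # 'lan-agent-3'

alive[manager] = False                  # manager fails
manager = elect_manager(agents, alive)  # agents re-elect 'lan-agent-2'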


Journal ArticleDOI
TL;DR: The issues involved in the concurrent execution of multidatabase transactions are discussed, a new concurrency control correctness criterion that is less restrictive than global serializability is proposed, and it is shown how multidatabase SQL can be extended to allow the user to specify multidatabase transactions in a nonprocedural way.
Abstract: In many application areas the information that may be of interest to a user is stored under the control of multiple, autonomous database systems. To support global transactions in a multidatabase environment, we must coordinate the activities of multiple Database Management Systems that were designed for independent, stand-alone operation. The autonomy and heterogeneity of these systems present a major impediment to the direct adaptation of transaction management mechanisms developed for distributed databases. In this paper we introduce a transaction model designed for a multidatabase environment. A multidatabase transaction is defined by providing a set of (local) sub-transactions, together with their precedence and dataflow requirements. Additionally, the transaction designer may specify failure atomicity and execution atomicity requirements of the multidatabase transaction. These high-level specifications are then used by the scheduler of a multidatabase transaction to assure that its execution satisfies the constraints imposed by the semantics of the application. Uncontrolled interleaving of multidatabase transactions may lead to the violation of interdatabase integrity constraints. We discuss the issues involved in a concurrent execution of multidatabase transactions and propose a new concurrency control correctness criterion that is less restrictive than global serializability. We also show how the multidatabase SQL can be extended to allow the user to specify multidatabase transactions in a nonprocedural way.
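
A hypothetical Python rendering of the transaction specification described above, with subtransactions, precedence and dataflow requirements, and the two atomicity requirements. All field names and the scheduling predicate are illustrative assumptions, not the paper's notation.

# Hypothetical multidatabase transaction specification.
multidb_txn = {
    "subtransactions": ["debit@bankA", "credit@bankB", "log@audit"],
    # Precedence: credit may start only after debit commits.
    "precedence": [("debit@bankA", "credit@bankB")],
    # Dataflow: the debited amount flows between subtransactions.
    "dataflow": [("debit@bankA", "amount", "credit@bankB")],
    # Failure atomicity: these must commit or abort together.
    "failure_atomic": [{"debit@bankA", "credit@bankB"}],
    # Execution atomicity: these must appear isolated as a unit.
    "execution_atomic": [{"debit@bankA", "credit@bankB"}],
}

def ready_to_run(sub, committed, spec):
    """A scheduler may start a subtransaction once every predecessor in
    the precedence relation has committed."""
    return all(pred in committed
               for pred, succ in spec["precedence"] if succ == sub)

assert not ready_to_run("credit@bankB", set(), multidb_txn)
assert ready_to_run("credit@bankB", {"debit@bankA"}, multidb_txn)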

Journal ArticleDOI
TL;DR: This paper proposes an architecture for modelling intelligent information systems and discusses how this architecture supports the communication of knowledge in such information systems.
Abstract: An intelligent information system may be composed of hundreds or thousands of entities each of which may possess some part of the overall knowledge of an organization. In such an environment there is a need for these entities to communicate in order to share knowledge and to cooperate in accomplishing organizational activities. In this paper we propose an architecture for modelling intelligent information systems and discuss how this architecture supports the communication of knowledge in such information systems.

Journal ArticleDOI
TL;DR: A number of key research directions and open problems to be explored as steps towards improving the effectiveness of object technology are highlighted.
Abstract: Object-orientation offers more than just objects, classes and inheritance as means to structure applications. It is an approach to application development in which software systems can be constructed by composing and refining pre-designed, plug-compatible software components. But for this approach to be successfully applied, programming languages must provide better support for component specification and software composition, the software development life-cycle must separate the issues of generic component design and reuse from that of constructing applications to meet specific requirements, and, more generally, the way we develop, manage, exchange and market software must adapt to better support large-scale reuse for software communities. In this paper we shall explore these themes and we will highlight a number of key research directions and open problems to be explored as steps towards improving the effectiveness of object technology.

Journal ArticleDOI
Won Kim, Yoon-Joon Lee, Jungyun Seo
TL;DR: The framework is first cast in the context of the relational model of data, and is then extended to account for object-oriented concepts that constitute an object- oriented data model, including nested objects, methods, and inheritance hierarchy.
Abstract: An active database system reacts to a set of external events such as a timer interrupt or access to a particular object in the database. A trigger is a general mechanism for active data management, in the context of either a centralized or a distributed system (including one of autonomous, cooperating agents). It consists of three parts: event specification, integrity constraint specification, and action specification. The event specification in a trigger is a set of events which will cause the condition in the constraint specification to be checked. If the condition is true, the actions in the action specification will be initiated. In this paper, we develop a framework for supporting triggers in object-oriented database systems. The framework consists of a categorization for each of the three components of a trigger. The framework is first cast in the context of the relational model of data, and is then extended to account for object-oriented concepts that constitute an object-oriented data model, including nested objects, methods, and the inheritance hierarchy.
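
The three-part trigger structure lends itself to a direct sketch. This Python fragment is a minimal event-condition-action skeleton under assumed names, not the paper's framework for object-oriented databases.

# Hypothetical sketch of a trigger's three parts.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    events: set                    # events that cause the check
    condition: Callable            # integrity constraint to evaluate
    action: Callable               # initiated when the condition holds

def on_event(event, db, triggers):
    for t in triggers:
        if event in t.events and t.condition(db):
            t.action(db)

# Example: re-order stock when an update leaves quantity below threshold.
db = {"quantity": 3}
triggers = [Trigger(events={"update:quantity"},
                    condition=lambda db: db["quantity"] < 5,
                    action=lambda db: db.update(reorder=True))]
on_event("update:quantity", db, triggers)
assert db.get("reorder")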

Journal ArticleDOI
TL;DR: This paper presents OMNI, a framework for integrating existing knowledge-based systems so that they can cooperate during problem solving while remaining distributed over a computing environment.
Abstract: Over the past ten years a myriad of knowledge-based expert systems have been developed and deployed. These systems have a narrow scope and usually operate in stand-alone mode. They also follow different implementation philosophies and use a variety of reasoning methods. To address problems of wider scope, researchers have developed systems that utilize either centralized or distributed computational models. Each of these systems is homogeneous and, owing to the way it was developed, prohibitively expensive for real-world settings. In this paper we present OMNI, a framework for integrating existing knowledge-based systems so that they can cooperate during problem solving while remaining distributed over a computing environment.

Journal ArticleDOI
TL;DR: This paper describes and demonstrates the supervenience architecture, a multilevel architecture for integrating planning and reacting in complex, dynamic environments, and an implementation of the architecture called APE (for Abstraction-Partitioned Evaluator).
Abstract: For intelligent systems to interact with external agents and changing domains, they must be able to perceive and to affect their environments while computing long term projection (planning) of future states. This paper describes and demonstrates the supervenience architecture, a multilevel architecture for integrating planning and reacting in complex, dynamic environments. We briefly review the underlying concept of supervenience, a form of abstraction with affinities both to abstraction in AI planning systems, and to knowledge-partitioning schemes in hierarchical control systems. We show how this concept can be distilled into a strong constraint on the design of dynamic-world planning systems. We then describe the supervenience architecture and an implementation of the architecture called APE (for Abstraction-Partitioned Evaluator). The application of APE to the HomeBot domain is used to demonstrate the capabilities of the architecture.
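
A rough Python sketch of the multilevel organization described above, with perceptual information abstracted upward through the levels and control refined downward toward actuation. The level names and functions are invented and heavily simplified, not APE's actual design.

# Hypothetical multilevel dynamic-world planner in the spirit of the
# supervenience architecture: perception flows up, control flows down,
# and no level bypasses the level beneath it.
LEVELS = ["perceptual/motor", "spatial", "causal", "planning"]

class Level:
    def __init__(self, name):
        self.name = name
        self.state = None

    def abstract_up(self, lower_state):
        """Summarize the level below into this level's terms."""
        return f"{self.name}-view({lower_state})"

def control_down(levels, goal):
    """Refine an abstract goal level by level until it is concrete."""
    for lvl in reversed(levels):
        goal = f"{lvl.name}-refines({goal})"
    return goal

levels = [Level(n) for n in LEVELS]
state = "raw-sensor-readings"
for lvl in levels:                      # perception percolates upward
    state = lvl.state = lvl.abstract_up(state)
print(control_down(levels, "tidy the room"))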

Journal ArticleDOI
TL;DR: The TEDIUM environment and the methods for its use are described, its effect on the software process is evaluated, and an extended discussion is given of how TEDIUM classifies, represents, and manages knowledge.
Abstract: This paper examines the future of software engineering with particular emphasis on the development of intelligent and cooperating information systems (ICISs). After a brief historical overview, the applications of the 1990s are characterized as having open requirements, depending on reuse, emphasizing integration, and relying on diverse computational models. It is suggested that experience with TEDIUM, an environment for developing interactive information systems, offers insight into how software engineering can adjust to its new challenges. The environment and the methods for its use are described, and its effect on the software process is evaluated. Because the environment employs a knowledge-based approach to software development, there is an extended discussion of how TEDIUM classifies, represents, and manages this knowledge. A final section relates the experience with TEDIUM to the demands of ICIS development and evolution.

Journal ArticleDOI
TL;DR: This paper defines several different types of object systems in terms of their interfaces and capabilities, from the viewpoint of how these support the requirements of cooperative information systems, and examines the distinguishing features and general architecture of systems of each type in the light of a general model of OMS architecture.
Abstract: Much work has been done in the last decade in the related areas of object-oriented programming languages and object-oriented databases. Researchers from both areas now seem to be working toward a common end, that of an object management system, or OMS. An OMS is constructed similarly to an OODB but provides a general purpose concurrent object-oriented programming language as well, complementing the OODB query facilities. In this paper, we will define several different types of object systems (object servers, persistent OOPL’s, OODB’s and OMS’s) in terms of their interfaces and capabilities from the viewpoint of how these support the requirements of cooperative information systems. We will examine the distinguishing features and general architecture of systems of each type in the light of a general model of OMS architecture.


Journal ArticleDOI
TL;DR: An application is implemented which, through its own experience, learns how to control the traffic in a telephone network, and the results for one set of experiments are shown.
Abstract: Intelligent and Cooperative Information Systems (ICIS) will have large numbers of distributed, heterogeneous agents interacting and cooperating to solve problems regardless of location, original mission, or platform. The agents in an ICIS will adapt to new and possibly surprising situations, preferably without human intervention. These systems will not only control a domain, but also will improve their own performance over time, that is, they will learn. This paper describes five heterogeneous learning agents and how they are integrated into an Integrated Learning System (ILS) where some of the agents cooperate to improve performance. The issues involve coordinating distributed, cooperating, heterogeneous problem-solvers, combining various learning paradigms, and integrating different reasoning techniques. ILS also includes a central controller, called The Learning Coordinator (TLC), that manages control flow and communication among the agents using a high-level communication protocol. In order to demonstrate the generality of the ILS architecture, we implemented an application which, through its own experience, learns how to control the traffic in a telephone network, and we show the results for one set of experiments. Options for enhancements of the ILS architecture are also discussed.
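
As a hypothetical sketch of the coordinator's job, the following Python fragment has a central controller collect proposals from heterogeneous learning agents and select among them. The agents, the confidence scoring, and the protocol are assumptions for illustration, not ILS's implementation.

# Hypothetical central coordinator over heterogeneous learning agents.
class LearningCoordinator:
    def __init__(self, agents):
        self.agents = agents           # name -> callable learning agent

    def solve(self, problem):
        """Collect (confidence, suggestion) proposals from every agent
        and choose the highest-confidence one; sharing the outcome is
        what lets the agents improve from each other's experience."""
        proposals = {name: agent(problem)
                     for name, agent in self.agents.items()}
        return max(proposals.items(), key=lambda kv: kv[1][0])

tlc = LearningCoordinator({
    "rule-inducer":  lambda p: (0.7, f"reroute via node-2 for {p}"),
    "case-reasoner": lambda p: (0.9, f"reuse past fix for {p}"),
})
print(tlc.solve("congestion on trunk-5"))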