
Showing papers in "Operating Systems Review in 1995"


Journal ArticleDOI
TL;DR: This paper discusses a possible weakness in the proposed protocol, develops some enhancements and simplifications, and provides a security analysis of the resultant minimal EKE protocol, which yields a protocol with some interesting properties.
Abstract: In their recent paper, "Encrypted Key Exchange: Password-based Protocols Secure Against Dictionary Attacks," Bellovin and Merritt propose a novel and elegant method for safeguarding weak passwords. This paper discusses a possible weakness in the proposed protocol, develops some enhancements and simplifications, and provides a security analysis of the resultant minimal EKE protocol. In addition, the basic 2-party EKE model is extended to the 3-party setting; this yields a protocol with some interesting properties. Most importantly, this paper illustrates, once again, the subtlety associated with designing password-based protocols.
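
To make the underlying idea concrete, here is a toy Python sketch of an EKE-style exchange (not Bellovin and Merritt's exact construction, nor the minimal variant analyzed in this paper): Diffie-Hellman public values are encrypted under a password-derived key, so a wiretapper captures nothing against which to test dictionary guesses. The group parameters and the XOR "cipher" are placeholder assumptions, far too weak for real use.

```python
# Toy EKE-style exchange: DH public values are masked with the password,
# so an eavesdropper cannot mount an off-line dictionary attack.
import hashlib
import secrets

P = 2**64 - 59          # a prime; real EKE needs a much larger group
G = 2

def pw_mask(password: bytes) -> int:
    # Derive a masking value from the password (placeholder for a cipher).
    return int.from_bytes(hashlib.sha256(password).digest(), "big") % P

def enc(value: int, password: bytes) -> int:
    return value ^ pw_mask(password)    # toy symmetric "encryption" (XOR)

password = b"weak-password"
a = secrets.randbelow(P - 2) + 1        # Alice's ephemeral secret
b = secrets.randbelow(P - 2) + 1        # Bob's ephemeral secret

msg_a = enc(pow(G, a, P), password)     # Alice -> Bob: E_pw(g^a mod p)
msg_b = enc(pow(G, b, P), password)     # Bob -> Alice: E_pw(g^b mod p)

# Each side decrypts with the password (XOR is its own inverse),
# then completes Diffie-Hellman to get a strong session key.
k_alice = pow(enc(msg_b, password), a, P)
k_bob   = pow(enc(msg_a, password), b, P)
assert k_alice == k_bob
print(hex(k_alice))
```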

292 citations


Journal ArticleDOI
TL;DR: This work shows that 3-party-based authentication protocols are not resistant to a new type of attack called "undetectable on-line password guessing attack", where the authentication server responds and leaks verifiable information for an attacker to verify his guess.
Abstract: Several 3-party-based authentication protocols have been proposed which are resistant to off-line password guessing attacks. We show that they are not resistant to a new type of attack called the "undetectable on-line password guessing attack". The authentication server is not able to notice this kind of attack from the clients' (attacker's) requests, because they do not include enough information about the clients (or attacker). Either freshness or authenticity of these requests is not guaranteed. Thus the authentication server responds and leaks verifiable information that an attacker can use to verify his guess.
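
A hypothetical sketch of the attack's shape: the server cannot distinguish a guessing probe from a legitimate request, and its reply is verifiable off-line. The message format and key derivation below are invented for illustration; they are not the protocols analyzed in the paper.

```python
# Sketch of an undetectable on-line guessing attack: the server cannot
# tell a guess from a legitimate request, and its reply lets the attacker
# verify each guess. All message formats here are hypothetical.
import hashlib
import hmac
import os

def kdf(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

SERVER_DB = {"alice": kdf("sunshine")}   # server stores password-derived keys

def server_respond(user: str, challenge: bytes) -> bytes:
    # The request proves neither freshness nor the sender's identity,
    # so the server answers any syntactically valid query.
    return hmac.new(SERVER_DB[user], challenge, hashlib.sha256).digest()

# Attacker picks its own challenge, gets a reply, then verifies guesses
# off-line against that reply; each probe looks like a normal request.
challenge = os.urandom(16)
reply = server_respond("alice", challenge)
for guess in ["123456", "password", "sunshine", "letmein"]:
    if hmac.new(kdf(guess), challenge, hashlib.sha256).digest() == reply:
        print("password recovered:", guess)
        break
```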

281 citations


Journal ArticleDOI
TL;DR: The SPIN operating system enables system services to be defined in an application-specific fashion through an extensible microkernel and offers applications fine-grained control over a machine's logical and physical resources through run-time adaptation of the system to application requirements.
Abstract: Application domains such as multimedia, databases, and parallel computing, require operating system services with high performance and high functionality. Existing operating systems provide fixed interfaces and implementations to system services and resources. This makes them inappropriate for applications whose resource demands and usage patterns are poorly matched by the services provided. The SPIN operating system enables system services to be defined in an application-specific fashion through an extensible microkernel. It offers applications fine-grained control over a machine's logical and physical resources through run-time adaptation of the system to application requirements.
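
A minimal sketch of the extensibility idea, assuming a hypothetical handler registry (SPIN's real extensions are Modula-3 code dynamically linked into the kernel, not Python callbacks):

```python
# Application-specific kernel extension: applications register their own
# policy for a service "event" instead of living with one fixed policy.

class ExtensibleKernel:
    def __init__(self):
        self._handlers = {}          # event name -> installed extension

    def install(self, event: str, handler) -> None:
        self._handlers[event] = handler

    def raise_event(self, event: str, *args):
        # Fall back to a default policy when no extension is installed.
        return self._handlers.get(event, self._default)(*args)

    @staticmethod
    def _default(*args):
        return "default policy"

kernel = ExtensibleKernel()

# A multimedia app installs its own page-eviction policy: never evict
# pages holding frames that are about to be displayed.
pinned_frames = {7, 8}
kernel.install("page_evict",
               lambda page: "keep" if page in pinned_frames else "evict")

print(kernel.raise_event("page_evict", 7))    # keep
print(kernel.raise_event("page_evict", 3))    # evict
print(kernel.raise_event("quantum_expired"))  # default policy
```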

103 citations


Journal ArticleDOI
TL;DR: This paper identifies application-aware adaptation as an essential capability of mobile clients, and provides an overview of Odyssey, an architecture that supports this capability.
Abstract: This paper identifies application-aware adaptation as an essential capability of mobile clients, and provides an overview of Odyssey, an architecture that supports this capability. Functionality that has hitherto been implemented monolithically must now be split between the operating system and individual applications. The role of the operating system is to sense external events, and to monitor and allocate scarce resources. In contrast, the role of individual applications is to adapt to changing conditions by using the information and resources provided by the operating system.
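
A small sketch of the division of labor described above, with invented interface names and thresholds (not Odyssey's actual API): the system senses and notifies; the application decides how to degrade fidelity.

```python
# Application-aware adaptation: the system monitors a scarce resource
# (bandwidth) and upcalls applications, which adapt fidelity themselves.

class System:
    def __init__(self):
        self._watchers = []   # (low, high, callback) registered by apps

    def register(self, low: float, high: float, callback) -> None:
        # App asks to be notified when bandwidth leaves [low, high] kB/s.
        self._watchers.append((low, high, callback))

    def sense(self, bandwidth: float) -> None:
        for low, high, cb in self._watchers:
            if not (low <= bandwidth <= high):
                cb(bandwidth)

class VideoPlayer:
    def __init__(self, system: System):
        self.quality = "color"
        system.register(low=100.0, high=float("inf"), callback=self.adapt)

    def adapt(self, bandwidth: float) -> None:
        # The application, not the OS, decides how to degrade.
        self.quality = "greyscale" if bandwidth >= 30 else "stills"

sys_ = System()
player = VideoPlayer(sys_)
sys_.sense(250.0); print(player.quality)  # color (within window)
sys_.sense(60.0);  print(player.quality)  # greyscale
sys_.sense(10.0);  print(player.quality)  # stills
```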

76 citations


Journal ArticleDOI
TL;DR: In the Solaris 2 implementation of UNIX [Eykholt 92] [Kleiman 92], interrupts are converted into threads using a low-overhead technique, which allows a single synchronization model to be used throughout the kernel.
Abstract: Most operating system implementations contain two fundamental forms of asynchrony: processes (or equivalently, internal threads) and interrupts. Processes (or threads) synchronize using primitives such as mutexes and condition variables, while interrupts are synchronized by preventing their occurrence for a period of time. The latter technique is not only expensive, but locks out interrupts on the mere possibility that an interrupt will occur and interfere with the particular critical section of code that was interrupted. In the Solaris 2 implementation of UNIX [Eykholt 92] [Kleiman 92], these two forms are unified into a single model: threads. Interrupts are converted into threads using a low-overhead technique. This allows a single synchronization model to be used throughout the kernel. In addition, it reduces the number of times interrupts are locked out, removes the overhead of masking interrupts, and allows modular code to be oblivious to the interrupt level at which it is called.
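
A rough Python analogy of the unified model (illustrative only; the real mechanism converts a low-level interrupt frame into a kernel thread): the "interrupt handler" runs as a thread and takes the same mutex as ordinary kernel threads, so no interrupt masking is needed around the critical section.

```python
# Interrupts as threads: one synchronization model (a mutex) covers both
# ordinary kernel work and interrupt handling.
import queue
import threading
import time

buffer_lock = threading.Lock()
shared_buffer = []
irq_queue: "queue.Queue[int]" = queue.Queue()

def interrupt_thread():
    # Instead of masking interrupts around critical sections, the handler
    # runs as a thread and simply blocks on the mutex like everyone else.
    while True:
        irq = irq_queue.get()
        if irq < 0:
            return
        with buffer_lock:
            shared_buffer.append(f"data from irq {irq}")

def kernel_thread():
    with buffer_lock:                 # no interrupt masking needed here
        shared_buffer.append("kernel work")

t = threading.Thread(target=interrupt_thread)
t.start()
irq_queue.put(5)                      # "device raises an interrupt"
kernel_thread()
time.sleep(0.1)
irq_queue.put(-1)                     # shut the handler down
t.join()
print(shared_buffer)
```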

66 citations


Journal ArticleDOI
TL;DR: First experiences with the implementation show that this design for a migration system for groups of collaborating processes between Unix systems, without kernel modifications, reaches migration performance figures close to those of real distributed operating systems.
Abstract: In the past, several process migration facilities for distributed systems have been developed. Due to the complex nature of the subject, all those facilities have limitations that make them usable for only limited classes of applications and environments. We discuss some of the usual limitations and possible solutions. Specifically, we focus on migration of groups of collaborating processes between Unix systems without kernel modifications, and from this we derive the design for a migration system. First experiences with our implementation show that we reach performance figures for the migration that are close to those of real distributed operating systems.
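
A toy sketch of the user-level approach, assuming an invented state layout: the whole group is frozen and serialized at once so peer-related state stays consistent; a real implementation must also drain communication channels and pending signals.

```python
# User-level migration sketch: a group of collaborating "processes" is
# frozen, its state serialized, and restarted elsewhere, with no kernel
# changes, mirroring the paper's constraint.
import pickle

class Process:
    def __init__(self, pid: int, peers: list):
        self.pid, self.peers, self.counter = pid, peers, 0

    def step(self) -> None:
        self.counter += 1

def checkpoint(group: list) -> bytes:
    # Freeze the whole group at once so in-flight peer state stays
    # consistent; real systems must also drain channels and signals.
    return pickle.dumps(group)

def restart(image: bytes) -> list:
    return pickle.loads(image)

group = [Process(1, [2]), Process(2, [1])]
for p in group:
    p.step()

image = checkpoint(group)          # ... ship image to the target host ...
migrated = restart(image)
print([(p.pid, p.counter) for p in migrated])  # [(1, 1), (2, 1)]
```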

59 citations


Journal ArticleDOI
TL;DR: The question of what techniques can be employed in mobile computer operating systems that can reduce the power consumption of today's mobile computing devices is explored.
Abstract: Many factors have contributed to the birth and continued growth of mobile computing, including recent advances in hardware and communications technology. With this new paradigm however come new challenges in computer operating systems development. These challenges include heretofore relatively unusual items such as frequent network disconnections, communications bandwidth limitations, resource restrictions, and power limitations. It is the last of these challenges that we shall explore in this paper---that is the question of what techniques can be employed in mobile computer operating systems that can reduce the power consumption of today's mobile computing devices.
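
As one classic candidate technique (not necessarily one proposed in this paper), consider spinning the disk down after an idle timeout; the sketch below weighs spindle power against spin-up energy for a made-up access trace and made-up power figures.

```python
# Disk spin-down policy sketch: short timeouts save idle power but pay a
# spin-up penalty on the next access. All constants are illustrative.

def disk_energy(accesses, horizon, timeout,
                spin_w=2.0, idle_w=0.9, wake_j=4.0):
    """Joules consumed if the disk spins down after `timeout` idle seconds."""
    energy, last = 0.0, 0.0
    for t in list(accesses) + [horizon]:
        gap = t - last
        energy += min(gap, timeout) * spin_w          # spinning while "warm"
        if gap > timeout:
            energy += (gap - timeout) * idle_w        # slept for the rest
            if t < horizon:
                energy += wake_j                      # spin-up penalty
        last = t
    return energy

trace = [0.5, 1.0, 9.0, 9.2, 30.0]    # hypothetical access times (seconds)
for timeout in (1, 5, 15):
    print(f"timeout {timeout:>2}s: {disk_energy(trace, 60, timeout):6.1f} J")
```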

59 citations


Journal ArticleDOI
TL;DR: A new operating system structure, the exokernel, is defined that safely exports the resources defined by the underlying hardware, putting abstractions traditionally implemented by the kernel out into user space, where user-level libraries and servers abstract the exposed hardware resources.
Abstract: To provide modularity and performance, operating system kernels should have only minimal embedded functionality. Today's operating systems are large, inefficient and, most importantly, inflexible. In our view, most operating system performance and flexibility problems can be eliminated simply by pushing the operating system interface lower. Our goal is to put abstractions traditionally implemented by the kernel out into user-space, where user-level libraries and servers abstract the exposed hardware resources. To achieve this goal, we have defined a new operating system structure, exokernel, that safely exports the resources defined by the underlying hardware. To enable applications to benefit from full hardware functionality and performance, they are allowed to download additions to the supervisor-mode execution environment. To guarantee that these extensions are safe, techniques such as code inspection, inlined cross-domain procedure calls, and secure languages are used. To test and evaluate exokernels and their customization techniques a prototype exokernel, Aegis, is being developed.
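
A toy sketch of safely "downloading" application code, with an entirely invented mini-language (the real Aegis techniques, code inspection, inlined cross-domain calls and secure languages, are far richer): the kernel inspects a submitted filter against a whitelist of safe operations before installing it.

```python
# The application submits a tiny filter program; the kernel checks it
# against a safe subset before it may run in supervisor mode.

SAFE_OPS = {"load_byte", "eq", "and", "accept", "reject"}

def inspect_program(program):
    # Reject anything outside the safe subset before installing it.
    for op, *args in program:
        if op not in SAFE_OPS:
            raise ValueError(f"unsafe op rejected: {op}")
    return program

def run_filter(program, packet: bytes) -> bool:
    stack = []
    for op, *args in program:
        if op == "load_byte":
            stack.append(packet[args[0]])
        elif op == "eq":
            stack.append(stack.pop() == args[0])
        elif op == "and":
            stack.append(stack.pop() and stack.pop())
        elif op == "accept":
            return bool(stack.pop()) if stack else True
        elif op == "reject":
            return False
    return False

# Application-supplied filter: accept packets whose first byte is 0x45.
flt = inspect_program([("load_byte", 0), ("eq", 0x45), ("accept",)])
print(run_filter(flt, b"\x45\x00\x00"))   # True
print(run_filter(flt, b"\x60\x00\x00"))   # False
```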

52 citations


Journal ArticleDOI
TL;DR: A cache kernel is developed, a new type of micro-kernel that supports a range of operating system configurations and is fault-tolerant because it is protected from the rest of the operating system (and applications), is replicated in large-scale configurations, and includes audit and recovery mechanisms.
Abstract: Operating system design has had limited success in providing adequate application functionality and a poor record in avoiding excessive growth in size and complexity, especially with protected operating systems. Applications require far greater control over memory, I/O and processing resources to meet their requirements. For example, database transaction processing systems include their own "kernel" which can much better manage resources for the application than can the application-ignorant general-purpose conventional operating system mechanisms. Large-scale parallel applications have similar requirements. The same requirements arise with servers implemented outside the operating system kernel. In our research, we have been exploring the approach of making the operating system kernel a cache for active operating system objects such as processes, address spaces and communication channels, rather than a complete manager of these objects. The resulting system is smaller than recent so-called micro-kernels, and also provides greater flexibility for applications, including real-time applications, database management systems and large-scale simulations. As part of this research, we have developed what we call a cache kernel, a new type of micro-kernel that supports operating system configurations across these dimensions. The cache kernel can also be regarded as providing a hardware adaptation layer (HAL) to operating system services rather than trying to just provide a key subset of OS services, as has been the common approach in previous micro-kernel work. However, in contrast to conventional HALs, the cache kernel is fault-tolerant because it is protected from the rest of the operating system (and applications), it is replicated in large-scale configurations and it includes audit and recovery mechanisms. A cache kernel has been implemented on scalable shared-memory and networked multi-computer [2] hardware which provides architectural support for the cache kernel approach. Figure 1 illustrates a typical target configuration. There is an instance of the cache kernel per multi-processor module (MPM), each managing the processors, second-level cache and network interface of that MPM. The cache kernel executes out of PROM and local memory of the MPM, making it hardware-independent of the rest of the system except for power. That is, the separate cache kernels and MPMs fail independently. Operating system services are provided by application kernels, server kernels and conventional operating system emulation kernels in conjunction with privileged MPM resource managers (MRM) that execute on top of the cache kernel. These kernels may be in separate protected address spaces or a shared library within a sophisticated application address space. A system bus connects the MPMs to each other and the memory modules. A high-speed network interface per MPM connects this node to file servers and other similarly configured processing nodes. This overall design can be simplified for real-time applications and similar restricted scenarios. For example, with relatively static partitioning of resources, an embedded real-time application could be structured as one or more application spaces incorporating application kernels as shared libraries executing directly on top of the cache kernel.
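
The caching idea itself can be sketched compactly, assuming invented load/writeback callbacks: the kernel keeps only a bounded set of active object descriptors and faults others in on demand, evicting the least recently used.

```python
# Cache-kernel sketch: the kernel is a bounded cache of active OS objects
# (processes, address spaces, channels); on a miss it faults the object in
# from the owning application kernel, evicting the LRU entry.
from collections import OrderedDict

class CacheKernel:
    def __init__(self, capacity, load_fn, writeback_fn):
        self.capacity = capacity
        self.load, self.writeback = load_fn, writeback_fn
        self.cache = OrderedDict()        # object id -> object state

    def lookup(self, oid):
        if oid in self.cache:
            self.cache.move_to_end(oid)   # mark as most recently used
            return self.cache[oid]
        if len(self.cache) >= self.capacity:
            victim, state = self.cache.popitem(last=False)
            self.writeback(victim, state) # descriptor returns to its kernel
        self.cache[oid] = self.load(oid)  # fault the object in
        return self.cache[oid]

app_kernel_store = {f"proc-{i}": {"regs": [0] * 4} for i in range(8)}
ck = CacheKernel(
    capacity=3,
    load_fn=lambda oid: app_kernel_store[oid],
    writeback_fn=lambda oid, st: app_kernel_store.__setitem__(oid, st))

for oid in ["proc-0", "proc-1", "proc-2", "proc-0", "proc-5"]:
    ck.lookup(oid)
print(list(ck.cache))   # ['proc-2', 'proc-0', 'proc-5']: proc-1 evicted
```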

51 citations


Journal ArticleDOI
TL;DR: This project introduces a method, termed the System Vulnerability Index (SVI), that analyzes a number of factors affecting security; these factors are evaluated and combined, through the use of special rules, to provide a measure of vulnerability.
Abstract: The lack of a standard gauge for quantifying computer system vulnerability is a hindrance to communicating information about vulnerabilities, and is thus a hindrance to reducing those vulnerabilities. The inability to address this issue through uniform semantics often leads to uncoordinated efforts at combating exposure to common avenues of exploitation. The de facto standard for evaluating computer security is the government's Trusted Computer System Evaluation Criteria, also known as the Orange Book. However, it is a generally accepted fact that the majority of non-government multi-user computer systems are classified into one of its two lower classes. The link between the higher classes and government classified data makes the measure unsuitable for commercial use. This project presents a feasible approach for resolving this problem by introducing a standardized assessment. It introduces a method, termed the System Vulnerability Index (SVI), that analyzes a number of factors that affect security. These factors are evaluated and combined, through the use of special rules, to provide a measure of vulnerability. The strength of this method is in its abstraction of the problem, which makes it applicable to various operating systems and hardware implementations. User and superuser actions, as well as clues to a potentially breached state of security, serve as the basis for the security-relevant factors. Facts for assessment are presented in a form suitable for implementation in a rule-based expert system.
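
A hypothetical miniature of an SVI-style computation, with invented factor names, weights and rules: factors are scored, combined by weight, and overridden by special rules such as clues to a potentially breached state.

```python
# Toy SVI-style index: security-relevant factors are scored, then
# combined by simple rules into one number. Everything here is invented.

factors = {                  # 0.0 (good) .. 1.0 (bad), assessed per system
    "password_quality": 0.7,
    "superuser_hygiene": 0.2,
    "patch_latency": 0.5,
    "audit_coverage": 0.9,
}
weights = {"password_quality": 0.3, "superuser_hygiene": 0.3,
           "patch_latency": 0.2, "audit_coverage": 0.2}

def system_vulnerability_index(factors, weights):
    svi = sum(weights[f] * v for f, v in factors.items())
    # Rule: a clue of a potentially breached state dominates other factors.
    if factors["audit_coverage"] > 0.8 and factors["password_quality"] > 0.5:
        svi = max(svi, 0.9)
    return round(svi, 2)

print(system_vulnerability_index(factors, weights))
```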

Journal ArticleDOI
TL;DR: It is claimed that using an uncertified key prudently can give performance advantages and does not necessarily reduce the security of authentication protocols, as long as the validity of the key can be verified at the end of the authentication process.
Abstract: Most authentication protocols for distributed systems achieve identification and key distribution on the belief that the use of an uncertified key, i.e. a key whose freshness and authenticity cannot be immediately verified by the receiving principal, should be avoided midway through an authentication process. In this paper we claim that using an uncertified key prudently can give performance advantages and does not necessarily reduce the security of authentication protocols, as long as the validity of the key can be verified at the end of the authentication process. A nonce-based authentication protocol using uncertified keys is proposed. Its total number of messages is shown to be minimal among all authentication protocols with the same formalized goals of authentication. The properties which make the protocol optimal in terms of message complexity are elaborated, and a formal logical analysis of the protocol is performed. The protocol is extended to counter the session-key compromise problem and to support repeated authentication, in a more secure and flexible way, without losing its optimality.
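
A toy two-party sketch of the idea, with a made-up cipher and message flow (not the paper's optimal protocol): A accepts and uses a session key whose freshness it cannot yet check, an uncertified key, and certifies it only in the final step via its nonce.

```python
# Uncertified-key sketch: the session key's validity is verified only at
# the end of the exchange. The toy cipher below is illustration only.
import hashlib
import hmac
import os

def E(key: bytes, msg: bytes) -> bytes:
    # Toy "encryption": fixed keystream from the key, plus a MAC tag.
    stream = hashlib.sha256(key + b"stream").digest() * (len(msg) // 32 + 1)
    ct = bytes(m ^ s for m, s in zip(msg, stream))
    return hmac.new(key, ct, hashlib.sha256).digest() + ct

def D(key: bytes, blob: bytes) -> bytes:
    tag, ct = blob[:32], blob[32:]
    assert hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest())
    stream = hashlib.sha256(key + b"stream").digest() * (len(ct) // 32 + 1)
    return bytes(c ^ s for c, s in zip(ct, stream))

K_ab = os.urandom(32)           # long-term key shared by A and B

Na = os.urandom(16)             # Msg 1 (A -> B): A's nonce, in the clear

Ks = os.urandom(32)             # B picks a fresh session key
msg2 = E(K_ab, Ks)              # Msg 2 (B -> A): Ks, as yet *uncertified*
msg3 = E(Ks, Na)                # Msg 3 (B -> A): proves Ks fresh via Na

Ks_at_A = D(K_ab, msg2)         # A uses the key before certifying it...
assert D(Ks_at_A, msg3) == Na   # ...and verifies it at the protocol's end
print("session key certified at the final step")
```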


Journal ArticleDOI
TL;DR: The principle of upcall-driven application structuring whereby communications events are system rather than application initiated and the principle of decoupling of control transfer and data transfer are discussed.
Abstract: We propose some architectural principles we have found useful for the support of continuous media applications in a microkernel environment. In particular, we discuss i) the principle of upcall-driven application structuring whereby communications events are system rather than application initiated, ii) the principle of split-level system structuring whereby key system functions are carried out co-operatively between kernel and user level components and iii) the principle of decoupling of control transfer and data transfer. Under these general headings a number of particular mechanisms and techniques are discussed. Our suggestions arise from experiences in implementing a Chorus based real-time and multimedia support infrastructure within the SUMO project.
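
A small sketch of principles i) and iii) together, with invented interface names: the kernel initiates the communication event via an upcall, passing only a small descriptor; the payload moves separately, when the application asks for it.

```python
# Upcall-driven structuring: the system initiates communication events and
# calls *up* into registered handlers; control transfer is decoupled from
# data transfer (only a descriptor crosses the boundary).

class Kernel:
    def __init__(self):
        self._upcalls = {}
        self._buffers = {}

    def register_upcall(self, event: str, handler) -> None:
        self._upcalls[event] = handler

    def packet_arrived(self, payload: bytes) -> None:
        # Control transfer: a small descriptor, not the payload itself.
        self._buffers[id(payload)] = payload
        self._upcalls["net.rx"]({"len": len(payload), "buf_id": id(payload)})

    def fetch(self, buf_id: int) -> bytes:
        return self._buffers.pop(buf_id)   # data moves only when needed

kernel = Kernel()

def on_rx(descriptor):
    print("upcall: frame of", descriptor["len"], "bytes available")
    data = kernel.fetch(descriptor["buf_id"])
    print("application pulled payload:", data)

kernel.register_upcall("net.rx", on_rx)
kernel.packet_arrived(b"\x01\x02\x03\x04")
```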

Journal ArticleDOI
TL;DR: This paper presents a concept for general migration of nearly all operating system objects of a UNIX environment, implemented as part of the MDX operating system; most of the mechanism should also apply to other message-passing based distributed operating systems.
Abstract: Load management in distributed systems is usually focused on balancing process execution and communication load. Stress on storage media and I/O-devices is considered only indirectly or disregarded. For I/O-intensive processes this imposes severe restrictions on balancing algorithms: processes have to be placed relative to fixed allocated resources. Therefore, beyond process migration, there is a need for a migration of all operating system objects, like files, pipes, timers, virtual terminals, and print jobs. In addition to new options for balancing CPU loads, this also makes it possible to balance the loads associated with these objects, like storage capacity or I/O-bandwidth. This paper presents a concept for general migration of nearly all operating system objects of a UNIX environment. Migration of all these objects works in the same UNIX-compliant and transparent manner. Objects can be moved throughout a distributed system independently of each other and at any time, according to a user-defined policy. The migration mechanism is implemented as part of the MDX operating system; we present performance measurements. We believe that most of the mechanism can also apply to other message-passing based distributed operating systems.
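
A toy sketch of the object-migration idea with an invented node/registry layout: a non-process object (here a "file") moves between nodes while clients keep using the same identifier.

```python
# Migrating non-process objects (here, a "file") between nodes in a
# message-passing system, keeping location transparent to clients.

class Node:
    def __init__(self, name: str):
        self.name, self.objects = name, {}

    def migrate(self, obj_id: str, target: "Node", registry: dict) -> None:
        # Move the object's full state and repoint the registry, so
        # clients keep using the same object id afterwards.
        target.objects[obj_id] = self.objects.pop(obj_id)
        registry[obj_id] = target

registry: dict = {}                      # object id -> owning node
a, b = Node("a"), Node("b")
a.objects["file:/tmp/log"] = {"bytes": b"hello", "offset": 5}
registry["file:/tmp/log"] = a

def read(obj_id: str) -> bytes:          # location-transparent access
    return registry[obj_id].objects[obj_id]["bytes"]

print(read("file:/tmp/log"), "on", registry["file:/tmp/log"].name)
a.migrate("file:/tmp/log", b, registry)  # balance I/O load onto node b
print(read("file:/tmp/log"), "on", registry["file:/tmp/log"].name)
```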

Journal ArticleDOI
TL;DR: The process group mechanism is considered as an appropriate application structuring paradigm in such large-scale distributed systems and a formal characterization for the attribute "large scale" as applied to distributed systems is given.
Abstract: An increasing number of applications with reliability requirements are being deployed in distributed systems that span large geographic distances or manage large numbers of objects. We consider the process group mechanism as an appropriate application structuring paradigm in such large-scale distributed systems. We give a formal characterization for the attribute "large scale" as applied to distributed systems and examine the technical problems that need to be solved in making group technology scalable. Our design advocates multiple roles for group membership over a minimal set of abstractions and primitives. The design is currently being implemented on top of "off-the-shelf" technologies for both communication and computation.

Journal ArticleDOI
TL;DR: The meta-object implementation of the Charlotte migration mechanism is described; in this approach, migration is an operation provided by every object of any type, and each object migrates itself using an object-specific migration mechanism.
Abstract: Migration is one example of the insufficiently used potentials of distributed systems. Although migration can enhance the efficiency and the reliability of distributed systems, it is still rarely used. Two limitations contained in nearly all existing migration implementations prevent widespread usage: migration is restricted to processes, and the migration mechanism, i.e. the way state is transferred, is not adaptable to changing requirements. In our approach, migration is an operation provided by every object of any type. Triggered by higher-level migration policies, the object migrates itself using an object-specific migration mechanism. Changing requirements are handled by higher-level migration policies that adapt migration by exchanging the object's mechanisms. Adaptable migration was implemented within the BirliX operating system. Different migration mechanisms are accomplished by different meta objects, which can be attached to other objects. If an object has to be migrated, the meta object does the migration. Changing environmental requirements are handled by exchanging the meta object. As a result, each object has its own migration mechanism. The approach has been examined by implementing a couple of well-known migration mechanisms via meta objects. This paper describes the meta object implementation of the Charlotte migration mechanism.
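
The meta-object idea reduces to delegation plus run-time exchange, sketched below with invented mechanism classes (the paper's actual meta objects implement mechanisms such as Charlotte's):

```python
# Exchangeable migration mechanisms as meta objects: each object delegates
# *how* it migrates to an attached meta object, which a policy layer can
# swap at run time.

class FreezeAndCopy:
    """Meta object: stop the object, transfer all state at once, restart."""
    def migrate(self, obj, target_node):
        obj.running = False                      # freeze
        target_node[obj.name] = dict(obj.state)  # single bulk transfer
        obj.running = True

class IterativeCopy:
    """Meta object: transfer state piecewise; the object keeps running."""
    def migrate(self, obj, target_node):
        target_node[obj.name] = {}
        for key, value in obj.state.items():
            target_node[obj.name][key] = value

class MigratableObject:
    def __init__(self, name, meta):
        self.name, self.meta, self.running = name, meta, True
        self.state = {"pc": 100, "heap": [1, 2, 3]}

    def migrate(self, target_node):
        self.meta.migrate(self, target_node)     # delegate the mechanism

node_b: dict = {}
obj = MigratableObject("worker", FreezeAndCopy())
obj.meta = IterativeCopy()     # policy layer exchanges the meta object
obj.migrate(node_b)
print(node_b)
```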


Journal ArticleDOI
TL;DR: Guarded page tables help solve the sparsity problem and permit significant extensions of the current programming model without performance degradation: sparse occupation and coarse-grain pages can be handled by purely conventional hardware; fine-grain pages without fine-grain aliasing also become possible using conventional cache and TLB technology combined with stochastically colored allocation.
Abstract: To fully exploit the potential of large address spaces, e.g. 2^64-byte, the sparsity problem has to be solved in an efficient manner. Current address translation schemes either cause enormous space overhead (page table trees) or do not support address space structuring, object grouping and mixed page sizes (inverted page tables). Furthermore, an essential handicap of current virtual address spaces is their coarse granularity. It restricts the concept's relevance to low-level OS technology. Without this constraint, mapping could be a vertically integrating paradigm, useful on all levels from hardware up to application programming. Guarded page tables help solve both problems. They permit significant extensions of the current programming model without performance degradation: sparse occupation and coarse-grain (4K) pages can be handled by purely conventional hardware; fine-grain (down to 16-byte) pages without fine-grain aliasing also become possible using conventional cache and TLB technology combined with stochastically colored allocation. Unrestricted aliasing and unlimited user-level mapping without performance degradation may become possible through hardware innovation.
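
A miniature guarded-page-table lookup, with tiny invented field widths: each entry carries a guard bit string that the next virtual-address bits must match, so one entry replaces a chain of single-entry table levels in a sparsely occupied space.

```python
# Guarded page table lookup over bit-string addresses. Field widths are
# tiny for readability; real designs use 2^64 spaces and wider indices.

class Entry:
    def __init__(self, guard: str, child=None, frame=None):
        self.guard, self.child, self.frame = guard, child, frame

class Table:
    def __init__(self, index_bits: int):
        self.index_bits = index_bits
        self.slots = {}

def translate(root: Table, vaddr: str):
    """vaddr is a bit string; returns the mapped frame or None."""
    table, rest = root, vaddr
    while True:
        idx = int(rest[:table.index_bits], 2)
        rest = rest[table.index_bits:]
        entry = table.slots.get(idx)
        if entry is None or not rest.startswith(entry.guard):
            return None                      # translation fault
        rest = rest[len(entry.guard):]       # guard consumes these bits
        if entry.frame is not None:
            return entry.frame
        table = entry.child

# 12-bit toy address: 2-bit root index + 6-bit guard + 4-bit leaf index.
# The guard stands in for six levels of single-entry tables.
leaf = Table(index_bits=4)
leaf.slots[0b1010] = Entry(guard="", frame="frame-42")
root = Table(index_bits=2)
root.slots[0b01] = Entry(guard="110011", child=leaf)

print(translate(root, "01" + "110011" + "1010"))  # frame-42
print(translate(root, "01" + "000000" + "1010"))  # None: guard mismatch
```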

Journal ArticleDOI
TL;DR: In this paper, the authors explore issues in developing a better OS architecture that can fully enhance OS extensibility, and investigate how microkernel abstraction can be remodeled to support better reconfiguration in operating systems.
Abstract: The microkernel concept was once the most advocated approach for operating system development. Unfortunately, before its publicized advantages were fully realized in an operating system implementation, operating system researchers began pointing out its weaknesses and made their way towards developing "extensible" operating systems. New operating systems like SPIN, Aegis, Cache Kernel, Apertos and Scout employ new concepts to support application-specific customization and optimal allocation of system resources, in order to boost the performance of certain applications. The microkernel concept in itself never contradicts this purpose, as it is meant to provide basic, efficient primitives for the construction of system services. Probably, the crucial problem is that the OS architecture of most current microkernel implementations cannot suitably meet the new requirements of extensibility. In this paper, we try to explore issues in developing a better OS architecture that can fully enhance OS extensibility. Moreover, we investigate how microkernel abstraction can be remodeled to support better reconfiguration in operating systems. To cope with the conflicting issues of efficiency, flexibility and ease of reconfiguration, we suggest and discuss an approach to structuring operating systems. The approach is characterized by a lightweight meta-abstraction mechanism and progressive reflective reconfiguration.

Journal ArticleDOI
TL;DR: The Virtual Parallel File System (VIP-FS) as discussed by the authors is a file system for high speed parallel I/O. VIP-FS uses message-passing libraries to provide a parallel and distributed file system which can execute over multiprocessor machines or heterogeneous network environments.
Abstract: In the past couple of years, significant progress has been made in the development of message-passing libraries for parallel and distributed computing, and in the area of high-speed networking. Both technologies have evolved to the point where programmers and scientists are now porting many applications previously executed exclusively on parallel machines into distributed programs for execution on more readily available networks of workstations. Such advances in computing technology have also led to a tremendous increase in the amount of data being manipulated and produced by scientific and commercial application programs. Despite their popularity, message-passing libraries only provide part of the support necessary for most high performance distributed computing applications --- support for high speed parallel I/O is still lacking. In this paper, we provide an overview of the conceptual design of a parallel and distributed I/O file system, the Virtual Parallel File System (VIP-FS), and describe its implementation. VIP-FS makes use of message-passing libraries to provide a parallel and distributed file system which can execute over multiprocessor machines or heterogeneous network environments.
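
The striping idea at the heart of such a design can be sketched as follows, with the message layer faked by plain function calls (VIP-FS itself layers over message-passing libraries; the daemon structure here is invented):

```python
# Striping sketch for a parallel file system: a logical file is split
# round-robin across I/O daemons, and reads reassemble the blocks.

BLOCK = 4   # bytes per stripe block (tiny, for demonstration)

class IODaemon:                       # one per node, owns a local store
    def __init__(self):
        self.blocks = {}

def write(daemons, data: bytes) -> int:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    for i, blk in enumerate(blocks):
        daemons[i % len(daemons)].blocks[i] = blk   # round-robin placement
    return len(blocks)

def read(daemons, nblocks: int) -> bytes:
    return b"".join(daemons[i % len(daemons)].blocks[i]
                    for i in range(nblocks))

daemons = [IODaemon() for _ in range(3)]
n = write(daemons, b"parallel and distributed I/O")
print(read(daemons, n))
print([sorted(d.blocks) for d in daemons])   # blocks spread across daemons
```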

Journal ArticleDOI
TL;DR: A novel approach to object-oriented frameworks, the Class Hierarchy Framework concept recapitulated in this paper, is employed in structuring components of the file system.
Abstract: This paper presents the design of an object-oriented file system which was developed as part of the "OBJIX Object-Oriented Operating System" project. The file system is a self-contained program system which is decomposed using a standard object-oriented framework concept. A novel approach to object-oriented frameworks, the Class Hierarchy Framework concept recapitulated in this paper, is employed in structuring components of the file system. Further, this paper illustrates, using an example, how the file system handles a typical system call.

Journal ArticleDOI
TL;DR: A job selection policy based on on-line prediction of job behavior is proposed, and it is shown to improve mean response time of jobs and resource utilization of systems substantially compared with a policy that does not select jobs.
Abstract: A key issue of dynamic load balancing in a loosely coupled distributed system is selecting appropriate jobs to transfer. In this paper, a job selection policy based on on-line prediction of job behavior is proposed. Tracing is used at the beginning of execution of a job to predict the approximate execution time and resource requirements of the job, in order to make a correct decision about whether transferring the job is worthwhile. A dynamic load balancer using the job selection policy has been implemented. Experimental measurements show that it is able to improve mean response time of jobs and resource utilization of systems substantially compared with a balancer without the job selection policy.
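
A toy version of the decision rule, assuming naive linear extrapolation and made-up costs: trace the first slice of execution, predict total demand, and transfer only when the predicted remaining work outweighs the migration cost.

```python
# Job selection by on-line prediction: extrapolate total demand from an
# initial trace, then compare expected savings against migration cost.

def predict_total_time(traced_seconds: float, fraction_done: float) -> float:
    return traced_seconds / fraction_done      # naive linear extrapolation

def should_transfer(traced_seconds, fraction_done,
                    migration_cost=2.0, speedup_remote=1.8):
    total = predict_total_time(traced_seconds, fraction_done)
    remaining = total - traced_seconds
    # Worthwhile only if remote speedup on remaining work beats the cost.
    return remaining - remaining / speedup_remote > migration_cost

jobs = [("short-lived", 0.3, 0.5), ("long compute", 4.0, 0.1)]
for name, traced, frac in jobs:
    verdict = "transfer" if should_transfer(traced, frac) else "keep local"
    print(name, "->", verdict)
```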

Journal ArticleDOI
TL;DR: An open and flexible card life cycle, able to accommodate executable code loaded by different service providers, requires a new generation of smart cards.
Abstract: Integrated circuit cards, or smart cards, are now well known. Applications such as electronic purses (cash units stored in cards), subscriber identification cards used in cellular telephones, and access keys for pay-TV and information highways are emerging in many places with millions of users. More services are required by application providers and card holders. In the main, new integrated circuit cards are evolving towards non-predefined, multi-purpose, open and multi-user applications. Today, the operating systems implemented in integrated circuit cards cannot respond to these new trends. They have evolved from simple operating systems defining a hardware abstraction level up to file management systems or database management systems, where the card behavior was defined once at the manufacturing level or by the card issuer. The need for an open and flexible card life cycle, able to accommodate executable code loaded by different service providers, requires a new generation of smart cards. Operating systems based on object-oriented technologies are key components for future integrated circuit card applications.


Journal ArticleDOI
TL;DR: Following current trends in Distributed Systems, the authors propose the insertion of a neural network device into the kernel of LAHNOS to implement a selected allocation policy, providing improved system performance as well as relieving the user of some cumbersome decisions.
Abstract: LAHNOS is a Local Area Heterogeneous Operating System [1] currently being developed at the Universidad Nacional de San Luis, over which distributed services are to be built. This paper shows some enhancements to be introduced into the original design in order to achieve automatic allocation of remote execution requests to the best-fitted node under some chosen performance criteria. Following current trends in Distributed Systems, we propose the insertion of a neural network device into the kernel of LAHNOS to reflect a selected allocation policy, providing improved system performance as well as relieving the user of some cumbersome decisions.
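
As a sketch of what such a device might compute (weights and features invented; a real policy would be trained against the chosen performance criteria), a single-layer perceptron can score candidate nodes from their load figures:

```python
# Neural allocation sketch: a perceptron scores each node for a remote
# execution request; the highest-scoring node wins.
import math

WEIGHTS = [-2.0, -1.0, 1.5]   # penalize CPU load, run queue; reward memory

def score(cpu_load: float, runq: float, free_mem: float) -> float:
    s = (WEIGHTS[0] * cpu_load + WEIGHTS[1] * runq + WEIGHTS[2] * free_mem)
    return 1.0 / (1.0 + math.exp(-s))          # logistic activation

nodes = {                     # (cpu load, run-queue len, free mem), 0..1
    "node-1": (0.9, 0.8, 0.1),
    "node-2": (0.2, 0.1, 0.7),
    "node-3": (0.5, 0.4, 0.5),
}
best = max(nodes, key=lambda n: score(*nodes[n]))
print("allocate request to", best)   # node-2: lightly loaded, ample memory
```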

Journal ArticleDOI
TL;DR: The present paper deals with the Arena approach to the provision of pure user-level threads and the compromise between reducing policy and maintaining integrity in the HWO implementations is discussed.
Abstract: Moving resource management out of the operating system kernel facilitates a high degree of customisation. The lowest layer of the Arena system provides an abstract interface to conventional processor hardware (Mayes, 1993; Quick, 1995). The idea is to encapsulate the hardware behind an interface with certain low-level concepts which are generally applicable to any processor. Localization of hardware-dependency has the effect of increasing modularity and thus portability. This encapsulation, termed the Arena hardware object (HWO) supports portable user-level customizable resource management (Mayes et al., 1994). The aim is to remove resource management policy from the HWO whilst maintaining its integrity. The present paper deals with the Arena approach to the provision of pure user-level threads. Native implementations on Sparc and i486 processors are briefly described and performance figures are given. The compromise between reducing policy and maintaining integrity in the HWO implementations is discussed.
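
As a loose analogy for pure user-level threading (Python generators stand in for the register save/restore that a native HWO-based implementation performs on SPARC or i486), a scheduler can multiplex "threads" entirely in user space, with context switches that never enter a kernel:

```python
# User-level threading sketch: the scheduler and the "context switch"
# (a generator yield/resume) live wholly in user space.
from collections import deque

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                        # voluntary yield = user-level switch

class Scheduler:
    def __init__(self):
        self.ready = deque()

    def spawn(self, gen):
        self.ready.append(gen)

    def run(self):
        while self.ready:
            thread = self.ready.popleft()
            try:
                next(thread)                 # "restore context" and run
                self.ready.append(thread)    # still runnable: requeue
            except StopIteration:
                pass                         # thread finished

sched = Scheduler()
sched.spawn(worker("A", 2))
sched.spawn(worker("B", 3))
sched.run()
```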

Journal ArticleDOI
TL;DR: This position paper suggests that object-oriented operating systems may provide the means to meet the ever-growing demands of applications and believes that the modularity that is characteristic of OO systems should provide a performance benefit rather than a penalty.
Abstract: This position paper suggests that object-oriented operating systems may provide the means to meet the ever-growing demands of applications. As an example of a successful OOOS, we cite the http daemon. To support the contention that httpd is in fact an operating system, we observe that it implements uniform naming, persistent objects and an invocation meta-protocol, specifies and implements some useful objects, and provides a framework for extensibility. We also believe that the modularity that is characteristic of OO systems should provide a performance benefit rather than a penalty. Our ongoing work in the Synthetix project at OGI is exploring the possibilities for advanced optimizations in such systems.

Journal ArticleDOI
TL;DR: The services offered by DGDBM to the programmer, the architecture of the system, the adopted solutions for distributed transaction management, the general aspects of design and implementation and the perspectives and planned extensions for this project are described.
Abstract: This paper describes a set of facilities for programming distributed transactions over replicated files which are accessed by primary key. The files are located on several computers connected by a network. Each site has the set of GNU dbm (Gdbm) routines for local file management [15]. Above this platform we have built an interface and a set of services for distributed transaction programming. The resulting programming environment, "DGDBM", offers transparency with respect to data distribution and data replication, giving a centralized vision to the programmer. It provides distributed transaction management functions such as failure recovery, mutual consistency between copies, and concurrency control. DGDBM is a useful support for distributed application programming over replicated files in UNIX networks and is available as an API (application programming interface) for the C programmer. This paper describes the services offered by DGDBM to the programmer, the architecture of the system, the adopted solutions for distributed transaction management, the general aspects of design and implementation, and the perspectives and planned extensions for this project.
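
Mutual consistency between copies is classically achieved with two-phase commit, sketched here with plain dicts standing in for the per-site Gdbm files (this shows the general technique, not DGDBM's exact implementation):

```python
# Two-phase commit over replicated primary-key files: all sites vote in
# phase 1; only a unanimous "yes" lets phase 2 commit on every copy.

class Site:
    def __init__(self):
        self.db = {}                 # stand-in for a local Gdbm file
        self.pending = None

    def prepare(self, key, value) -> bool:
        self.pending = (key, value)  # write-ahead; vote yes if we got here
        return True

    def commit(self):
        key, value = self.pending
        self.db[key] = value
        self.pending = None

    def abort(self):
        self.pending = None

def transaction(sites, key, value) -> bool:
    if all(s.prepare(key, value) for s in sites):   # phase 1: voting
        for s in sites:
            s.commit()                              # phase 2: commit
        return True
    for s in sites:
        s.abort()                                   # any "no" aborts all
    return False

replicas = [Site(), Site(), Site()]
print(transaction(replicas, "acct:42", 100))
print([s.db for s in replicas])      # all copies mutually consistent
```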

Journal ArticleDOI
TL;DR: Three categories of concurrent systems are outlined: independent, competing and cooperating systems; the choice between competitive and cooperative concurrency is made by system designers.
Abstract: We assume that a system consists of components (programs, processes, tasks, threads, objects, etc.) which can be executed concurrently and which can communicate during this execution. There are many sophisticated facilities to support interaction. It can exist in the forms of message passing, signals, rendezvous, variable sharing; it can occur together with control flow movement when information is transferred between components as a set of parameters (remote procedure calls); it can be synchronous or asynchronous, etc. We follow the generalized classification of concurrent systems arrived at in [1]. Three categories of these are outlined here: independent, competing and cooperating systems. Generally speaking, competitive concurrency exists when two or more components are designed separately and use the same system resources (which are components as well). So the former have to compete for the latter and keep them at their disposal till there is no more need for them. Some examples of competitive concurrency are the use of OS resources, data servers, files, DBMSs, and objects (buffers, stacks, mails) in concurrent object-oriented programming. Normally, components compete for a resource (server) which knows nothing about the components that can use it. It can serve any clients if it is not busy. Cooperative concurrency exists when several components cooperate, i.e. do some job together and are aware of this. They can even communicate by resource sharing, but the important thing is that they have been designed together, cooperate to achieve their joint goal and use each other's help. They synchronize their execution and can opt for the information computed by another cooperating component. They are equal. In order to cooperate, they have to share some knowledge (of a name) which is representative of their cooperation. This can be each other's names or the name of the information that they exchange. Some examples are: parallel computation, control systems, systolic algorithms, etc. Note that a client and its server should be regarded as a cooperating pair, but two clients of a server as a competitive one. It is clear that the choice between competitive and cooperative concurrency is made by system designers. The same or a similar system can be thought of differently and designed using either kind of concurrency. Say, a producer and a consumer can be designed as being in a competitive relation if they know nothing about each other and if the only goal of the producer is to put each produced item into a store, and that of the consumer is to take it out of this store. In this case, both of them work with a lesser degree of synchronization. Obviously, they can be designed within cooperative concurrency if, for instance, the producer does not start producing the next item before it has some 'feedback' from the consumer about consuming the previous one. This assumes some information exchange between concurrent components, the coordination of their behaviour and the existence of some global predicate (which can serve as 'the joint goal' above) which could be satisfied better within a cooperative system. Note that although a communication facility can be more often used for a particular kind of concurrency, yet, generally speaking, it would be wrong to say that the latter is determined by the
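
The producer/consumer contrast described above can be written out directly; in this minimal sketch the competitive pair share only the store, while the cooperative pair exchange feedback before the next item is produced.

```python
# Competitive vs cooperative producer/consumer, as described in the text.
import queue
import threading

store: "queue.Queue[int]" = queue.Queue()

# Competitive design: each side knows only the store, never its peer.
def producer_competitive():
    for item in range(3):
        store.put(item)              # goal: put items into the store

def consumer_competitive():
    for _ in range(3):
        store.get()                  # goal: take items out; no feedback

# Cooperative design: the producer waits for the consumer's feedback
# before producing the next item (the shared "joint goal").
feedback = threading.Event()

def producer_cooperative():
    for item in range(3):
        store.put(item)
        feedback.wait()              # don't produce until peer confirms
        feedback.clear()

def consumer_cooperative():
    for _ in range(3):
        store.get()
        feedback.set()               # tell the producer to continue

for pair in [(producer_competitive, consumer_competitive),
             (producer_cooperative, consumer_cooperative)]:
    threads = [threading.Thread(target=f) for f in pair]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
print("both designs completed")
```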