
Showing papers on "Communications protocol published in 1990"


Journal ArticleDOI
TL;DR: The architecture presented attempts to integrate, in a smooth fashion, the communication protocols used to transport real-time data with the distributed computing system (DCS) within which any applications using the protocols must execute.
Abstract: A multimedia communication system includes both the communication protocols used to transport the real-time data and the distributed computing system (DCS) within which any applications using the protocols must execute. The architecture presented attempts to integrate these communications protocols with the DCS in a smooth fashion in order to ease the writing of multimedia applications. Two issues are identified as being essential to the success of this integration: the synchronization of related real-time data streams, and the management of heterogeneous multimedia hardware. The synchronization problem is tackled by defining explicit synchronization properties at the presentation level and by providing control and synchronization operations within the DCS which operate in terms of these properties. The heterogeneity problems are addressed by separating the data transport semantics (protocols themselves) from the control semantics (protocol interfaces).
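The notion of "explicit synchronization properties at the presentation level" lends itself to a concrete illustration. The C sketch below shows one plausible shape for such declarations, with DCS control operations expressed in terms of them; all type and function names are hypothetical assumptions, not the paper's actual interface.

```c
/* Hypothetical presentation-level synchronization properties: streams in
 * the same group declare a bound on their mutual skew, and DCS control
 * operations work in terms of these declarations. Illustrative only. */
typedef struct {
    int group_id;        /* streams in one group are presented in sync  */
    int max_skew_ms;     /* tolerated offset between related streams    */
    int period_ms;       /* nominal inter-frame period of the stream    */
} sync_props_t;

/* Assumed DCS operations that act on whole synchronization groups. */
extern int stream_open(const char *name, const sync_props_t *props);
extern int group_start(int group_id);   /* start all members together   */
extern int group_pause(int group_id);   /* pause all members together   */
```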

294 citations


Proceedings ArticleDOI
01 Aug 1990
TL;DR: The notion of cost-sensitive communication complexity is introduced and exemplified on the following basic communication problems: computing a global function, network synchronization, clock synchronization, controlling protocols' worst-case execution, connected components, spanning tree, etc., constructing a minimum spanning tree, constructing a shortest path tree.
Abstract: This paper introduces the notion of cost-sensitive communication complexity and exemplifies it on the following basic communication problems: computing a global function, network synchronization, clock synchronization, controlling protocols' worst-case execution, connected components, spanning tree, etc., constructing a minimum spanning tree, constructing a shortest path tree.
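The abstract does not state the definition, but a natural reading of "cost-sensitive" is that each link is charged at its own cost rather than uniformly. A hedged formalization:

```latex
% Assumed formalization (not quoted from the paper): in a network G=(V,E)
% with link costs c(e), the cost of an execution of protocol \pi charges
% each message by the cost of the link it traverses:
\[
  \mathrm{Cost}(\pi) \;=\; \sum_{e \in E} m_{\pi}(e)\, c(e),
\]
% where m_\pi(e) is the number of messages \pi sends over link e. Classical
% message complexity is recovered as the uniform case c(e) \equiv 1.
```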

141 citations


Journal ArticleDOI
Zvi Har'el1, Robert P. Kurshan1
TL;DR: A way to develop and implement communications protocols so they are logically sound and meet stated requirements is described, along with the experience of applying this methodology to develop a new session protocol at an interface of an AT&T product called the Trunk Operations Provisioning Administration System (TOPAS).
Abstract: We describe a way to develop and implement communications protocols so they are logically sound and meet stated requirements. Our methodology employs a software system called the coordination-specification analyzer (COSPAN) to facilitate logical testing (in contrast to simulation or system execution testing). Logical testing of a communications protocol is carried out on a succession of models of the protocol. Starting with a high-level model (e.g., a formal abstraction of a protocol standard), successively more refined (detailed) models are created. This succession ends with a low-level model which is in fact the code that runs the ultimate implementation of the protocol. Tests of successive models are defined not by test vectors, but by user-defined behavioral requirements appropriate to the given level of abstraction. Testing a high-level design permits early detection and correction of design errors. Successive refinement is carried out in a fashion that guarantees properties proved at one level of abstraction hold in all successive levels of abstraction. We recount the experience of an application of this methodology, employing COSPAN, to develop (analyze and implement in software) a new session protocol at an interface of an AT&T product called the Trunk Operations Provisioning Administration System (TOPAS).
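The refinement guarantee described above can be stated compactly. Assuming, as in COSPAN's omega-automaton setting, that "model M satisfies requirement T" means language containment, properties propagate down the refinement chain by transitivity of inclusion:

```latex
% Sketch under the stated assumption. If each refinement step preserves
% behaviour,
\[
  \mathcal{L}(M_{i+1}) \subseteq \mathcal{L}(M_i) \quad (i = 1, \dots, k-1),
\]
% then a requirement T verified against the high-level model M_1 holds at
% every lower level, down to the implementation-level model M_k:
\[
  \mathcal{L}(M_1) \subseteq \mathcal{L}(T)
  \;\Longrightarrow\;
  \mathcal{L}(M_k) \subseteq \mathcal{L}(T).
\]
```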

120 citations


Proceedings ArticleDOI
09 Oct 1990
TL;DR: An approach to structuring fault-tolerant RT-objects in the form of active object replicas is discussed, and the effects of a failure of a task in a replica on the responsiveness of remote objects are analyzed.
Abstract: A model of a distributed real-time system which supports reasoning about the consistency and accuracy of real-time data and about the performance of real-time communication protocols is presented. The conventional object model is extended into a model of a real-time (RT-) object which incorporates a real-time clock as a mechanism for initiating an object action as a function of real time. The notion of accuracy as referring to the time gap between a state variable in the external world and its representation in a real-time computer system is adopted. The effects of the temporal uncertainties of different classes of communication protocols on the consistency and the accuracy of RT-objects are analyzed. Finally, an approach to structuring fault-tolerant RT-objects in the form of active object replicas is discussed, and the effects of a failure of a task in a replica on the responsiveness of remote objects are analyzed.
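The accuracy notion admits a simple formalization. The following is a hedged reading of "the time gap between a state variable in the external world and its representation", not the paper's exact definition:

```latex
% Let x(t) be a state variable in the external world and \hat{x}(t) its
% representation inside the RT-object. The representation has accuracy
% (time gap) \Delta if, at every time t, the stored value is some
% sufficiently recent true value:
\[
  \forall t \;\; \exists t' \in [t - \Delta,\, t] : \;\; \hat{x}(t) = x(t').
\]
% The delivery-delay uncertainty of the communication protocol used to
% propagate updates bounds from below the \Delta an RT-object can guarantee.
```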

106 citations


Journal ArticleDOI
TL;DR: NEST is particularly useful as a tool to study the performance behavior of real (or realistically modeled) distributed systems in response to simulated complex dynamical network behaviors.
Abstract: The Network Simulation Testbed (NEST) is a graphical environment for simulation and rapid prototyping of distributed networked systems and protocols. Designers of distributed networked systems require the ability to study the system's operation under a variety of simulated network scenarios. For example, designers of a routing protocol need to study the steady-state performance features of the mechanism as well as its dynamic response to failure of links or switching nodes. Similarly, designers of a distributed transaction processing system need to study the performance of the system under a variety of load models as well as its response to failure conditions. NEST provides a complete environment for modeling, execution and monitoring of distributed systems of arbitrary complexity.

NEST is embedded within a standard UNIX environment. A user develops a simulation model of a communication network using a set of graphical tools provided by the NEST generic monitor tools. Node functions (e.g., a routing protocol) as well as communication link behaviors (e.g., packet loss or delay features) are typically coded by the user in C; in theory, any high-level block-structured language could be supported for this function. These user-provided procedures are linked with the simulated network model and executed efficiently by the NEST simulation server. The user can reconfigure the simulation scenario either through graphical interaction or under program control. The results of an execution can be graphically monitored through custom monitors, developed using NEST graphical tools.

NEST may thus be used to conduct simulation studies of arbitrary distributed networked systems. However, unlike pure simulation tools, NEST may also be used as an environment for rapid prototyping of distributed systems and protocols. The actual code of the systems developed in this manner can be used at any development stage as the node functions for a simulation, and the behavior of the system may be examined under a variety of simulated scenarios. For example, in the development of a routing protocol for a mobile packet radio network, it is possible to examine the speed with which the routing protocol responds to changes in the topology, and the probability and expected duration of a routing loop. The actual code of the routing protocol may be embedded as node functions within NEST; the only modifications involve the use of NEST calls upon the simulated network to send, receive or broadcast a message. Thus NEST is particularly useful as a tool to study the performance behavior of real (or realistically modeled) distributed systems in response to simulated complex dynamical network behaviors. Such dynamic response is typically beyond the scope of analytical techniques, which are restricted to modeling steady-state equilibrium behaviors.

Traditional approaches to simulation are either language-based or model-based. Language-based approaches (e.g., Simula, Simscript) provide users with specialized programming language constructs to support modeling and simulation. The key advantage of these approaches is their generality of application. They are, however, fundamentally limited as tools for studying complex distributed systems. First, they separate the tasks of modeling and simulation from those of design and development: a designer of a network protocol is required to develop the code in one environment using one language (e.g., C), while simultaneously developing a consistent simulation model (e.g., in Simscript). The distinctions between the simulation model and the actual system may be significant enough to reduce the effectiveness of simulation. This is particularly true for complex systems involving a long design cycle and significant changes. Second, these approaches require the modeler to efficiently manage the complexity of scheduling distributed system models (under arbitrary network scenarios).

Model-based approaches (e.g., queuing-network simulators such as IBM's RESQ [12]) provide users with extensive collections of tools supporting a particular simulation-modeling technique. The key advantage of model-based approaches is the efficiency with which they may handle large-scale simulations by utilizing model-specific techniques (e.g., fast algorithms to solve complex queuing network models). Their key disadvantage is a narrower scope of applications and questions that they may answer. For example, it is not possible within a pure queuing-network model to model and analyze complex transient behaviors (e.g., the formation of routing loops in a mobile packet radio network). The model-based approach, like the language-based approaches, suffers from having simulation/testing separated from design/development. It has the additional important disadvantage of requiring users to develop an in-depth understanding of the modeling techniques: designers of distributed database transaction systems are often unfamiliar with queuing models.

NEST pursues a different approach to simulation studies: extending a networked operating system environment to support simulation modeling and efficient execution. This environment-based approach shares the generality of its modeling power with language-based approaches: NEST may be used to model arbitrary distributed interacting systems. NEST also shares with the language-based approaches an internal execution architecture that accomplishes very efficient scheduling of a large number of processes. However, unlike language-based approaches, the user does not need to be concerned with the management of complex simulation scheduling problems. Furthermore, NEST does not require the user to master or use a separate simulation language facility; the processes of design, development and simulation are fully integrated. The user can study the behavior of the actual system being developed (at any level of detail) under arbitrary simulated scenarios. The routing protocol designer, for example, can attach the routing protocol designed (actual code with minor adjustments) to a NEST simulation and study the system behavior. As the system changes through the design process, new simulation studies may be conducted by attaching the new code to the same simulation models. NEST can thus be used as an integral part of the design process along with other tools (e.g., for debugging).

Like model-based approaches, NEST is specifically targeted toward a limited scope of applications: distributed networked systems. NEST supports a built-in customizable communication network model. However, this scope has been sufficiently broad to support studies ranging from low-level communication protocols to complex distributed transaction processing systems, avionic systems and even manufacturing processes.

The environment-based approach to simulation offers a few important attractions to users:
1. Simulation is integrated with the range of tools supported by the environment. The user can utilize graphics, statistical packages, debuggers and other standard tools of choice in the simulation study. Simulation can become an integral part of a standard development process.
2. Users need not develop extensive new skills or knowledge to pursue simulation studies.
3. Standard features of the environment can be used to enhance the range of applicability.

NEST simulation is configured as a network server with monitors as clients. The client/server model permits multiple remote accesses to a shared testbed, which can be very important in supporting a large-scale multisite project.

In this article we describe the architecture of NEST, illustrate its use, and discuss some aspects of the NEST implementation, featuring its design and providing examples of NEST applications.
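The prototyping claim above hinges on one mechanical detail: node functions are ordinary C procedures whose only concession to simulation is that they perform I/O through NEST calls to send, receive or broadcast messages. A minimal sketch of what such a node function could look like follows; the nest_* names and types are hypothetical stand-ins, not the actual NEST interface.

```c
/* Sketch of a NEST-style node function for a routing protocol. The nest_*
 * calls and message_t type are assumptions for illustration only. */
#include <string.h>

#define MSG_MAX 1024

typedef struct {
    int  src, dst;             /* node ids */
    int  len;
    char data[MSG_MAX];
} message_t;

/* Hypothetical NEST entry points (assumed, not verified against NEST). */
extern int  nest_recv(message_t *m);          /* block until a message arrives */
extern void nest_send(int dst, message_t *m); /* send over a simulated link    */
extern void nest_broadcast(message_t *m);     /* send to all neighbours        */
extern int  nest_my_id(void);

/* Node function: flood topology updates, forward data packets.
 * Duplicate suppression (e.g., sequence numbers) is omitted for brevity. */
void node_main(void)
{
    message_t m;
    for (;;) {
        if (nest_recv(&m) != 0)
            continue;
        if (strncmp(m.data, "TOPO", 4) == 0)
            nest_broadcast(&m);           /* propagate topology change  */
        else if (m.dst != nest_my_id())
            nest_send(m.dst, &m);         /* forward toward destination */
        /* else: deliver locally */
    }
}
```

Replacing the nest_* calls with the real kernel's send/receive primitives would, per the abstract, be the only change needed to run the same code outside the simulator.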

100 citations


01 Apr 1990
TL;DR: A hardware description language called HardwareC is presented, which supports both declarative and procedural semantics, has a C-like syntax, and is extended with notions of concurrent processes, message passing, timing constraints via tagging, resource constraints, explicit instantiation of models, and template models.
Abstract: High-level synthesis is the transformation from a behavioral level specification of hardware, through a series of optimizations and translations, to an implementation in terms of logic gates and registers. The success of a high-level synthesis system is heavily dependent on how effectively the high-level language captures the ideas of the designer in a simple and understandable way. Furthermore, as system-level issues such as communication protocols and design partitioning dominate the design process, the ability to specify constraints on the timing requirements and resource utilization of a design is necessary to ensure that the design can integrate with the rest of the system. In this paper, a hardware description language called HardwareC is presented. HardwareC supports both declarative and procedural semantics, has a C-like syntax, and is extended with notions of concurrent processes, message passing, timing constraints via tagging, resource constraints, explicit instantiation of models, and template models. The language is used as the input to the Hercules High-level Synthesis System.

99 citations


Proceedings ArticleDOI
01 Aug 1990
TL;DR: A high-speed local-area network called Nectar uses programmable communication processors as host interfaces; the TCP/IP protocol suite and Nectar-specific communication protocols have been implemented on the communication processor.
Abstract: We have built a high-speed local-area network called Nectar that uses programmable communication processors as host interfaces. In contrast to most protocol engines, our communication processors have a flexible runtime system that supports multiple transport protocols as well as application-specific activities. In particular, we have implemented the TCP/IP protocol suite and Nectar-specific communication protocols on the communication processor. The Nectar network currently has 25 hosts and has been in use for over a year. The flexibility of our communication processor design does not compromise its performance. The latency of a remote procedure call between application tasks executing on two Nectar hosts is less than 500 μsec. The same tasks can obtain a throughput of 28 Mbit/sec using either TCP/IP or Nectar-specific transport protocols. This throughput is limited by the VME bus that connects a host and its communication processor. Application tasks executing on two communication processors can obtain 90 Mbit/sec of the possible 100 Mbit/sec physical bandwidth using Nectar-specific transport protocols.

97 citations


Journal ArticleDOI
Gerard J. Holzmann1
TL;DR: The algorithm derived in this manner works in a fixed-size memory arena (it will never run out of memory), is up to two orders of magnitude faster than previous methods, and has superior coverage of the state space when analyzing large protocol systems.
Abstract: This paper studies the four basic types of algorithm that, over the last 10 years, have been developed for the automated verification of the logical consistency of data communication protocols. The algorithms are compared on memory usage, CPU time requirements, and the quality of the search for errors. It is shown that the best algorithm, according to the above criteria, can be improved further in a significant way, by avoiding a known performance bottleneck. The algorithm derived in this manner works in a fixed-size memory arena (it will never run out of memory), is up to two orders of magnitude faster than the previous methods, and has superior coverage of the state space when analyzing large protocol systems. The algorithm is the first for which the search efficiency (the number of states analyzed per second) does not depend on the size of the state space: there is no time penalty for analyzing very large state spaces. The practicality of the new algorithm has been tested in the verification of portions of AT&T's 5ESS® switch. The models analyzed in these tests generated up to 250 million composite system states, which could be analyzed effectively in an hour's worth of CPU time on a large mainframe computer.
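The combination of properties claimed here (a fixed memory arena, search speed independent of state-space size, partial but high coverage on very large spaces) is characteristic of hashing the visited-state set into a fixed bit array. The C sketch below illustrates that general idea; it is not the paper's code, and the state encoding and successor generation are placeholders.

```c
/* Generic bitstate-style reachability search: the visited-state set is a
 * fixed-size bit array indexed by a hash of the state, so membership tests
 * are O(1) and memory use is constant, at the cost of possibly skipping
 * states on hash collisions. */
#include <stdint.h>
#include <stdlib.h>

#define ARENA_BITS (1u << 27)              /* 2^27 bits = a 16 MB arena */

typedef struct { uint8_t bytes[16]; } state_t;   /* placeholder encoding */

static uint8_t arena[ARENA_BITS / 8];

static uint32_t hash_state(const state_t *s)
{
    uint32_t h = 2166136261u;              /* FNV-1a over the state bytes */
    for (size_t i = 0; i < sizeof s->bytes; i++)
        h = (h ^ s->bytes[i]) * 16777619u;
    return h % ARENA_BITS;
}

static int test_and_set(uint32_t bit)      /* returns nonzero if already set */
{
    uint8_t mask = (uint8_t)(1u << (bit & 7));
    int seen = arena[bit >> 3] & mask;
    arena[bit >> 3] |= mask;
    return seen != 0;
}

/* Placeholders for the protocol model under analysis. */
extern int  successors(const state_t *s, state_t *out, int max);
extern void check_state(const state_t *s);       /* assertions, deadlock... */

/* Depth-first search; a production tool would use an explicit stack. */
static void dfs(const state_t *s)
{
    state_t next[8];
    if (test_and_set(hash_state(s)))
        return;                            /* treat as already visited */
    check_state(s);
    int n = successors(s, next, 8);
    for (int i = 0; i < n; i++)
        dfs(&next[i]);
}
```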

60 citations


Journal ArticleDOI
TL;DR: The current SNMP and CMOT approaches to TCP/IP network management are compared from several different perspectives; comparisons are based on both theory and knowledge gained from actual implementation experiences.
Abstract: Recent network management activities in the TCP/IP community have focused on standardizing two network management protocols, the Simple Network Management Protocol (SNMP) and the Common Management Information Services and Protocol Over TCP/IP (CMOT), which provide for the exchange of management information. The current SNMP and CMOT approaches to TCP/IP network management are compared from several different perspectives; comparisons are based on both theory and knowledge gained from actual implementation experiences. The current level of user and vendor acceptance for these two protocols is examined and explained, and ongoing standardization efforts are summarized. Relevant ongoing work is summarized, and trends over the next few years are discussed.

49 citations



Proceedings ArticleDOI
03 Jun 1990
TL;DR: The primary goal of the Axon architecture is to support a high-performance data path delivering VHSI (very high speed internetwork) bandwidth directly to applications.
Abstract: The primary goal of the Axon architecture is to support a high-performance data path delivering VHSI (very high speed internetwork) bandwidth directly to applications. The significant features of Axon are: (1) an integrated design of host and network interface architecture, operating systems, and communication protocols; (2) a network virtual storage facility which includes support for virtual shared memory on loosely coupled systems; (3) a high-performance, lightweight object transport facility which can be used by both message-passing and shared-memory mechanisms; and (4) a pipelined network interface which can provide a path directly between the VHSI and host memory.

Journal ArticleDOI
TL;DR: An approach for automated modeling and verification of communication protocols is presented, and a language that specifies the input/output behavior of protocol entities is introduced as the starting point of the approach; verification of the linguistic specifications is discussed.
Abstract: An approach for automated modeling and verification of communication protocols is presented. A language that specifies the input/output behavior of protocol entities is introduced as the starting point of the approach, and verification of the linguistic specifications is discussed. Rules for conversion of the specifications into a Petri net model (based on a timed Petri net) are presented and illustrated by examples. This leads to a second level of verification on the net model. The approach is illustrated by its application to a part of the LAPD protocol.
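For readers unfamiliar with the target formalism, the core of a Petri net model is its firing rule. The following is the standard textbook form for ordinary nets, given as background rather than taken from the paper:

```latex
% A transition t with input places \bullet t and output places t\bullet is
% enabled in marking m when every input place holds a token,
\[
  \forall p \in \bullet t : \; m(p) \ge 1,
\]
% and firing it (in a timed net, only after its associated delay \tau(t)
% has elapsed) yields the new marking
\[
  m'(p) \;=\; m(p) - [\, p \in \bullet t \,] + [\, p \in t\bullet \,].
\]
```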

Journal ArticleDOI
TL;DR: An outlook on available methods and tools for partially automating the activities during the general protocol and software development life cycle is given, and ongoing research directions are discussed.
Abstract: The collection of Open Systems Interconnection (OSI) standards are intended to allow the connection of heterogeneous computer systems for a variety of applications. In this context, the protocol specifications are of particular importance, since they represent the standards which are the basis for the implementation and testing of compatible OSI systems. This paper has been written as a tutorial on questions related to protocol specifications. It provides certain basic definitions related to protocol specifications and specification languages. Special attention is given to the specification formalisms used for OSI protocol and service descriptions, including semi-formal languages such as state tables, ASN.1 and TTCN, and formal description techniques (FDTs) such as Estelle, LOTOS, and SDL. The presentation is placed within the context of the general protocol and software development life cycle. An outlook to available methods and tools for partially automating the activities during this cycle is given, and ongoing research directions are discussed.

Proceedings ArticleDOI
02 Dec 1990
TL;DR: A study to verify the speed and routing performance of a newly developed self-healing network (SHN) restoration technique showed that the protocol reliably performs multiple successively shortest paths rerouting of whole cable cuts in under 2.5 s, and acts like a conventional span protection switching system in about 40 ms.
Abstract: Telecom Canada commissioned a study to verify the speed and routing performance of a newly developed self-healing network (SHN) restoration technique. The particular SHN protocol is based on a unique paradigm for rapid, distributed, physical-layer interaction among digital cross-connect system (DCS) machines. An implementation of the protocol, as it would be installed in a DCS machine, was executed concurrently in every node of models of the Canadian transcontinental network, using an asynchronous network emulator. Results from nearly 1800 span-cutting experiments in networks of up to 93 nodes, 157 spans, and 6000 DS-3s showed that the protocol reliably performs multiple successively-shortest-paths rerouting of whole cable cuts in under 2.5 s, and acts like a conventional span protection switching system in about 40 ms. Restoration behaviour with multiple simultaneous faults and time-concatenated faults has also been verified.

Journal ArticleDOI
TL;DR: The remaining problem of conversion between the incompatible communication protocols used in the different systems can be solved automatically, as demonstrated for the case of a simple example of data transmission service from a sender to a receiver process.
Abstract: Gateways are introduced for interworking between several, possibly heterogeneous, distributed computer systems. A gateway has to provide for the necessary adaptation between the communication protocols used in the interconnected networks. The adaptation problem is best handled by considering the communication services of the interconnected systems. Once the problem is solved at this level, the remaining problem of conversion between the incompatible communication protocols used in the different systems can be solved automatically, as demonstrated for the case of a simple example of data transmission service from a sender to a receiver process.

Journal ArticleDOI
TL;DR: In this article, the authors discuss semi-automatic implementation of communication protocols within the Reference Model of the International Organization for Standardization (ISO) for Open Systems Interconnection (OSI).
Abstract: This paper discusses semi-automatic implementation of communication protocols in the Reference Model of the International Organization for Standardization (ISO) for Open Systems Interconnection (OSI). The semi-automatic code generation techniques produce high-level language code (C, Pascal, etc.) from formal descriptions or protocol specifications. A survey is given of different approaches to semi-automatic code generation. As an example, we present a protocol in the ISO protocol specification technique Estelle. We show the code generated by the Estelle Development System (EDS) and sample output from the generated implementation.
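To make "high-level language code from formal descriptions" concrete, here is a minimal sketch of the style of C such a generator might emit for an Estelle-like extended finite state machine: one function per module, dispatching on (state, input) pairs. The state, event and primitive names are invented for illustration and do not reproduce EDS output.

```c
/* Hypothetical generator output for a simple stop-and-wait sender module. */
typedef enum { ST_IDLE, ST_WAIT_ACK } state_t;
typedef enum { EV_SEND_REQ, EV_ACK, EV_TIMEOUT } event_t;

typedef struct {
    state_t state;
    int     retries;
} module_t;

extern void output_data(void);      /* interaction primitives (placeholders) */
extern void start_timer(void);

void transition(module_t *m, event_t ev)
{
    switch (m->state) {
    case ST_IDLE:
        if (ev == EV_SEND_REQ) {            /* when SEND.request */
            output_data();                  /* output DATA PDU   */
            start_timer();
            m->state = ST_WAIT_ACK;         /* to WAIT_ACK       */
        }
        break;
    case ST_WAIT_ACK:
        if (ev == EV_ACK) {
            m->state = ST_IDLE;             /* to IDLE           */
        } else if (ev == EV_TIMEOUT && m->retries++ < 3) {
            output_data();                  /* retransmit        */
            start_timer();
        }
        break;
    }
}
```

The appeal of this style is that each generated case corresponds one-to-one with a transition clause of the formal description, which keeps the implementation auditable against the specification.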

Proceedings ArticleDOI
01 Mar 1990
TL;DR: An approach to the problem of systematic development of applications requiring access to multiple and heterogeneous hardware and software systems is presented, based on a common communication and data exchange protocol that uses local access managers to protect the autonomy of member software systems.
Abstract: An approach to the problem of systematic development of applications requiring access to multiple and heterogeneous hardware and software systems is presented. The approach is based on a common communication and data exchange protocol that uses local access managers to protect the autonomy of member software systems. The solution is modular and can be implemented in a heterogeneous hardware and software environment using different operating systems and different network protocols. The design of the system, its major components, and its prototype implementation are described. Particular emphasis is placed on the distributed operation language (DOL), used to specify invocation, synchronization, and data exchange between various software and hardware components of a distributed system.


Patent
27 Apr 1990
TL;DR: In this article, the authors propose a protocol for exchanging data and control information between stations arranged in a daisy-chain wide-area network (WAN) and a local-area-network (LAN).
Abstract: A network topography and associated protocol for exchanging data and control information between stations arranged in a daisy-chain wide-area-network (WAN) and a local-area-network (LAN). The protocol includes a circuit layer which routes messages between WAN stations, and a slot layer which routes slots between LAN channels and WAN devices. The protocol uses virtual circuits to route messages between devices. Broadcast messages distribute control information to all circuits. A circuit-specific control message allows passing control information to each circuit. Virtual circuits are established with a broadcast circuit-connect message and disconnected with a broadcast circuit-disconnect message. The maximum message size is regulated in accordance with the transmission speed of links over which a message must travel.

Journal ArticleDOI
TL;DR: In the early 1980's, research and development was initiated on methods to test the developing International Standards Organization's Open Systems Interconnection (ISO/OSI) communications protocols as mentioned in this paper.
Abstract: In the early 1980's, research and development was initiated on methods to test the developing International Standards Organization's Open Systems Interconnection (ISO/OSI) communications protocols. In 1983, ISO initiated work to develop standardized test methods. A tutorial overview of the test methods and the test notation named TTCN, which were developed within ISO, is presented. Issues regarding multilayer test methods are still not completely resolved. These issues are identified, and alternatives to ISO's methods are explored. Application of the formal description techniques named ASN.1 and Estelle to multi-layer test systems is illustrated. Concluding remarks summarize the status of current practice in conformance testing and the status of evolving OSI testing standards.

Journal ArticleDOI
TL;DR: The performance of the DRCS system has been closely monitored, and the results of this monitoring will be used to provide ideas for improvements which will be incorporated into version 2 of the system.
Abstract: This paper describes a distributed revision control system (DRCS) that is suitable for use in wide area networks. A selective amount of replication is used to improve performance. The system was developed as an extension to an existing revision control system (RCS). DRCS runs on various versions of the UNIX system. It uses the UUCP communication protocol, but it can be easily adapted to use another communications protocol. The system has been used as a tool to control the source files for a document that is being jointly authored by two persons who are geographically separated by over 200 km. The performance of the system has been closely monitored, and the results of this monitoring will be used to provide ideas for improvements which will be incorporated into version 2 of the system.

Journal ArticleDOI
TL;DR: This tutorial focuses on methods that show a proposed communication protocol meets its specification by proving properties, and that offer support for the design of correct specifications of OSI protocols.
Abstract: This is a tutorial on formal methods for verification of communication protocols. We focus on methods to show that a proposed communication protocol meets its specification by proving properties, and which offer support for the design of correct specifications of OSI protocols. Emphasis is put on methods where algorithms are available and can be integrated in protocol design tools. This kind of support is especially useful in the early phases of protocol design, to analyse basic properties. The methods can be used in the process of producing detailed specifications in any of the standardised formal description techniques: Estelle, LOTOS and SDL.

Journal ArticleDOI
TL;DR: The reliability of general systems using dynamic and static redundancy schemes is derived, with communication protocols considered as a representative example; it is shown that, in some cases, static redundancy yields a more reliable system than dynamic redundancy.
Abstract: The reliability of general systems using dynamic and static redundancy schemes is derived, and communication protocols are considered as a representative example. The system reliability for three broadcast protocols using various redundancy-allocation policies is studied. The analytic and simulation results show that, in some cases, static redundancy yields a more reliable system than dynamic redundancy. This is essential for distributed system applications. In some cases, the failure detection time is substantial, so that the hardware reliability and hence the system reliability are adversely affected when using dynamic redundancy. This can be a critical factor for distributed system applications, because a large overhead of communication can be required for error detection. In these cases, unreliable protocols can provide better system reliability than reliable protocols, especially when the communication network is highly reliable and when the machine failure rate is relatively large. Since unreliable protocols generate less load and less resource contention, they are preferable in such cases. The reliability should be analyzed to determine the optimal balance between reliable and unreliable protocols. Static redundancy can be more reliable than dynamic redundancy if the failure-detection time is large.
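A toy model makes the abstract's "in some cases" concrete. Assuming two identical units of reliability r, and imperfect failure detection with coverage c (this is an illustration, not the paper's analysis):

```latex
% Static redundancy: both units active, either one suffices.
\[
  R_{\mathrm{static}} \;=\; 1 - (1 - r)^2 \;=\; r + (1 - r)\,r .
\]
% Dynamic redundancy: the standby takes over only if the failure of the
% primary is detected, which happens with coverage c \in [0, 1].
\[
  R_{\mathrm{dynamic}} \;=\; r + c\,(1 - r)\,r \;\le\; R_{\mathrm{static}},
\]
% with equality only under perfect detection (c = 1). Slow or incomplete
% failure detection is exactly what erodes the dynamic scheme's advantage.
```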

Journal ArticleDOI
TL;DR: Key features of LANSF are presented at the syntactic level, the semantics of these features are discussed informally, and some implementation issues are highlighted.
Abstract: LANSF is a software package that was originally designed as a tool to investigate the behaviour of medium access control (MAC) level protocols. These protocols form an interesting class of distributed computations: timing of events is the key factor in them. The protocol definition language of LANSF is based on C, and protocols are specified (programmed) as collections of communicating, interrupt-driven processes. These specifications are executable: an event-driven emulator of MAC-level communication phenomena forms the foundation of the implementation. Some tools for debugging, testing, and validation of protocol specifications are provided. We present key features of LANSF at the syntactic level, comment informally on the semantics of these features, and highlight some implementation issues. A complete example of a LANSF application is discussed in the Appendix.

Patent
19 Jan 1990
TL;DR: In this paper, a communication interface for decoupling one software application from another software application such communications between applications are facilitated and applications may be developed in modularized fashion, the communication interface is comprised of two libraries of programs.
Abstract: A communication interface for decoupling one software application from another software application such communications between applications are facilitated and applications may be developed in modularized fashion. The communication interface is comprised of two libraries of programs. One library manages self-describing forms which contain actual data to be exchanged as well as type information regarding data format and class definition that contain semantic information. Another library man­ages communications and includes a subject mapper to receive subscription requests regarding a particular subject and map them to particular communication disciplines and to particular services supplying this information. A number of communication disciplines also cooperate with the subject mapper or directly with client applications to manage communications with various other applications using the communication protocols used by those other applications.
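The two mechanisms in the claim, self-describing forms and a subject mapper, can be sketched in a few lines of C. Everything below is a hypothetical illustration of the concepts; the patent discloses no source code, and all names are invented.

```c
/* Self-describing form fields carry a name and a type tag alongside the
 * data, so a receiver can interpret them without compile-time knowledge. */
#include <stdio.h>
#include <string.h>

typedef enum { F_INT, F_STRING } field_type_t;

typedef struct {                 /* one field of a self-describing form */
    const char  *name;           /* field name (semantic information)   */
    field_type_t type;           /* type tag describing the data format */
    union { int i; const char *s; } value;
} field_t;

typedef void (*discipline_t)(const char *subject, const field_t *f, int n);

typedef struct {                 /* subject-mapper table entry */
    const char  *subject;
    discipline_t deliver;        /* communication discipline for subject */
} mapping_t;

static void print_discipline(const char *subject, const field_t *f, int n)
{
    for (int i = 0; i < n; i++)
        printf("[%s] field %s\n", subject, f[i].name);
}

static mapping_t table[] = { { "quotes.ibm", print_discipline } };

/* Deliver a form to every discipline subscribed to its subject. */
static void publish(const char *subject, const field_t *f, int n)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].subject, subject) == 0)
            table[i].deliver(subject, f, n);
}
```

The decoupling comes from the indirection: publishers name subjects rather than peers, and the mapper chooses the discipline (and hence the wire protocol) behind the scenes.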

Journal ArticleDOI
TL;DR: An approach to a communication model that can be viewed as a special form of a model for open distributed applications is outlined and the functions of a system for the handling of multimedia information and a functional model approach, including a supporting environment, are considered.
Abstract: Requirements for distributed processing of multimedia information are summarized and compared with the latest efforts in standardization. An approach to a communication model that can be viewed as a special form of a model for open distributed applications is outlined. The functions of a system for the handling of multimedia information and a functional model approach, including a supporting environment, are considered. Synchronization aspects of isochronous and anisochronous communication are also outlined. The model is still under development. Detailed work needs to be done in the areas of formal specification of abstract services and modeling of communication protocols.

Journal ArticleDOI
TL;DR: The author explores methods for transition and coexistence between the two protocol suites of Internet/OSI and enumerates several approaches, discusses the positive and negative aspects of each, and describes their inter-relationships.
Abstract: The US DoD (Department of Defense) Internet suite of protocols (commonly known as TCP/IP for transmission control protocol/internet protocol) is the de facto open (nonproprietary) standard for computer communications in multivendor and multiadministration networks. However, some feel that protocols based on the open systems interconnection (OSI) model and promulgated by the International Organization for Standardization (ISO) will eventually achieve dominance and enjoy even greater success than TCP/IP. The author explores methods for transition and coexistence between the two protocol suites. He enumerates several approaches, discusses the positive and negative aspects of each, and describes their inter-relationships. Further, although the focus is on the problems of Internet/OSI transition and coexistence, none of the approaches described are unique to this problem. Rather, they are all general solutions to the problem of changing from one protocol suite to another or of having two arbitrary protocol suites coexisting.

Journal ArticleDOI
TL;DR: A comprehensive CEBus development system intended to solve common problems associated with home automation is described, which provides a complete development environment which allows an engineer to design, debug, and test devices communicating over a CEBus network.
Abstract: A comprehensive CEBus development system intended to solve common problems associated with home automation is described. The system provides a complete development environment which allows an engineer to design, debug, and test devices communicating over a CEBus network. Fully configurable modules are definable to specific user needs in order to verify many new applications and scenarios. By using the system, the engineer can concentrate on the design of the overall product and system-end-user interface and not be concerned about the details of the communication protocol standard. Furthermore, the system will be able to provide a common communication interface, so the engineer will not need a different design for each medium (such as powerline, twisted pair, infrared, etc.).

Proceedings ArticleDOI
R.E. Strom1
11 Oct 1990
TL;DR: The distinctive features of Hermes are processes as the basic units of modularity and interaction, ports as capabilities, a representation-independent pointerless type system, and compile-time checking which enforces protection on the granularity of a module.
Abstract: Hermes is an experimental language for implementing complex systems and distributed applications. It conceals low-level programming details, such as data representation, distribution, communications protocols, and operating system calls, while retaining expressiveness, checkability, and efficiency. Hermes supports multiple interacting applications and services within a single environment. Applications and services interact by making calls and passing typed parameters, exactly the same way modules interact within an application. The syntax and semantics of interaction are uniform, regardless of whether the interacting components are local or remote and whether they belong to the same user or to different users. The distinctive features of Hermes are processes as the basic units of modularity and interaction, ports as capabilities, a representation-independent pointerless type system, and compile-time checking which enforces protection on the granularity of a module. The concept of a multiapplication environment is discussed.

Journal ArticleDOI
TL;DR: Two vendor-independent network protocols that have risen to the top of the list of contenders for the title of 'the' standard are described and compared and it is concluded that the OSI model is better structured than TCP/IP.
Abstract: Two vendor-independent network protocols that have risen to the top of the list of contenders for the title of 'the' standard are described and compared. They are the transmission control protocol/internet protocol (TCP/IP), which was developed in the late 1960s as a research project conducted by the US Department of Defense, and the open systems interconnection reference model (OSIRM, or simply OSI), developed by the International Organization for Standardization (ISO) in the mid-1970s. A brief discussion of the various types of physical networks is given as background. The basic structure of the two protocols and the way they go about achieving their respective ends are examined. It is concluded that the OSI model is better structured than TCP/IP. However, TCP/IP is the older of the two protocols and has had time to develop a substantial user base, especially in the Unix community, where it has already become an unspoken standard and clearly dominates the market.