
Showing papers on "Communications protocol published in 1986"


Journal ArticleDOI
TL;DR: Two selection protocols that run on multiple access channels in log-logarithmic expected time are proposed, and a complementary lower bound is established showing that the first protocol falls within an additive constant of optimality and that the second differs from optimality by less than any multiplicative factor infinitesimally greater than 1 as the size of the problem approaches infinity.
Abstract: We propose two selection protocols that run on multiple access channels in log-logarithmic expected time, and establish a complementary lower bound showing that the first protocol falls within an additive constant of optimality and that the second differs from optimality by less than any multiplicative factor infinitesimally greater than 1 as the size of the problem approaches infinity. It is difficult to second-guess the fast-changing electronics industry, but our mathematical analysis could be relevant outside the traditional interests of communications protocols, to semaphore-like problems.
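The paper's log-logarithmic protocols are not reproduced here, but the channel model they run on is easy to make concrete. The sketch below simulates a slotted multiple access channel with ternary feedback (idle / success / collision) and a basic coin-flip splitting protocol that selects a single station; this simpler scheme needs on the order of log n slots rather than log log n, and all names are illustrative.

import random

def channel_feedback(transmitters):
    # Ternary feedback of a slotted multiple access channel.
    if len(transmitters) == 0:
        return "IDLE"
    if len(transmitters) == 1:
        return "SUCCESS"
    return "COLLISION"

def select_one(stations):
    # Repeatedly split the contending set by fair coin flips until exactly
    # one station transmits alone; return the winner and the slot count.
    active = set(stations)
    slots = 0
    while True:
        slots += 1
        transmitters = [s for s in active if random.random() < 0.5]
        outcome = channel_feedback(transmitters)
        if outcome == "SUCCESS":
            return transmitters[0], slots
        if outcome == "COLLISION":
            active = set(transmitters)   # only colliders stay in contention
        # on IDLE the whole active set simply tries again

winner, slots = select_one(range(1000))
print(f"selected station {winner} after {slots} slots")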

211 citations


Proceedings ArticleDOI
01 May 1986
TL;DR: The FDDI Token Ring Protocol controls communication over fiber optic rings with transmission rates in the range of 100 megabits per second, intended to give guaranteed response to time-critical messages by using a "timed token" protocol.
Abstract: Communication technology now makes it possible to support high data transmission rates at relatively low cost. In particular, optical fiber can be used as the medium in local area networks with data rates in the range of 100 megabits per second. Unfortunately, local area network topologies and communication protocols that work well with lower speed media are not necessarily appropriate when the data transmission rate is scaled up by approximately an order of magnitude. Recognizing this fact, an ANSI sub-committee (ANSIX3T9) has been working for the past two years on a proposed standard for a token ring protocol tailored to a transmission medium with transmission rate in the 100 megabits per second range. The protocol is referred to as the FDDI (Fiber Distributed Data Interface) Token Ring protocol. The proposal for the standard is now quite mature and nearly stable.
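As a rough illustration of the "timed token" idea (not the FDDI standard's exact rules), the sketch below shows the core budgeting decision a station makes when the token arrives: synchronous traffic keeps its pre-negotiated allocation, while asynchronous traffic may only use the time by which the token arrived early relative to the Target Token Rotation Time. The TTRT value and the helper names are assumptions for the example.

TTRT = 8.0  # target token rotation time in milliseconds (assumed value)

def token_holding_budget(time_since_last_token, sync_allocation):
    # Synchronous traffic always gets its pre-negotiated allocation;
    # asynchronous traffic may only use the time by which the token
    # arrived early relative to TTRT (zero if it arrived late).
    async_budget = max(0.0, TTRT - time_since_last_token)
    return sync_allocation, async_budget

print(token_holding_budget(5.5, 1.0))   # token 2.5 ms early -> (1.0, 2.5)
print(token_holding_budget(9.0, 1.0))   # token late         -> (1.0, 0.0)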

160 citations


Journal ArticleDOI
TL;DR: MAP development programs have evolved to the point of making products available for network implementation, and users should begin planning for MAP or inquiring about MAP products from computer and communications suppliers.
Abstract: A description is given of the Manufacturing Automation Protocol (MAP), a set of communication protocols based on a bus configuration, in which network stations are wired together with a common bus so that all can receive all messages. Any station, when sending a message, designates in the address portion of that message where it should go. All other stations receive the transmitted message and examine its address field (a code for the particular station to which the message is being sent). If a station decodes its own address, it takes the appropriate action; otherwise it ignores the message. MAP development programs have evolved to the point of making products available for network implementation, and users should begin planning for MAP or inquiring about MAP products from computer and communications suppliers. The first plant-wide implementation of MAP at General Motors will take place in 1986 at five truck and bus plants and at a factory-of-the-future project in Saginaw, Michigan.
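The address-filtering rule described above can be sketched in a few lines; the frame layout and station names below are illustrative and not part of the MAP specification.

class Station:
    def __init__(self, address):
        self.address = address
        self.inbox = []

    def on_frame(self, frame):
        # Act only on frames addressed to this station; ignore the rest.
        if frame["dest"] == self.address:
            self.inbox.append(frame["payload"])

def broadcast(stations, frame):
    # On a bus, every attached station sees every transmitted frame.
    for s in stations:
        s.on_frame(frame)

stations = [Station(a) for a in ("cell-controller", "robot-7", "plc-3")]
broadcast(stations, {"dest": "robot-7", "payload": "start cycle"})
print([(s.address, s.inbox) for s in stations])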

74 citations


Patent
23 May 1986
TL;DR: In this paper, a multi-port communications controller and variable protocol adapter is described, which utilizes a user programmable pluggable programming cartridge for defining individual communications port data service characteristics.
Abstract: A multi-port communications controller and variable protocol adapter is described. The adapter utilizes a user-programmable, pluggable programming cartridge for defining individual communications port data service characteristics. The port data service characteristics are interpreted by a microprocessor which manages the interchange from port to port and to or from memory or a host system. Direct memory access or interrupt-driven memory access modes of operation are individually selectable for each inbound and outbound communications channel. The communications protocols employed at each port may be of any standard type, with the microprocessor in the adapter making the appropriate conversion. Communication speeds can also be automatically recognized and matched for each port. An arbitration processor for both DMA and interrupt-driven data transfer services is included as the heart of the communications adapter design, providing individualized control over each inbound and outbound channel's mode of data transfer service for the optimum mode of operation for each port and type of data service required.

64 citations



Journal ArticleDOI
TL;DR: This work defines three graduated notions of fair progress (namely, node fairness, edge fairness, and network fairness) and defines three corresponding degrees of node liveness, and uses these techniques to establish the liveness of some practical communication protocols.
Abstract: Consider a network of communicating finite state machines that exchange messages over unbounded FIFO channels. Each machine in the network can be defined by a directed graph whose nodes represent the machine states and whose edges represent its transitions. In general, for a node in one of the machines to be live (i.e., encountered infinitely often during the course of communication), each machine in the network should progress in some fair fashion. We define three graduated notions of fair progress (namely, node fairness, edge fairness, and network fairness), and on this basis we define three corresponding degrees of node liveness. We discuss techniques to verify that a given node is live under each of these fairness assumptions. These techniques can be automated; and they are effective even if the network under consideration has an infinite number of reachable states. We use our techniques to establish the liveness of some practical communication protocols; these include an unbounded start-stop protocol, an unbounded alternating bit protocol, and a simplified version of the CSMA/CD protocol for local area networks.

50 citations


Journal ArticleDOI
TL;DR: An automated technique for protocol development is discussed along with its application to the specification, verification, and semiautomatic implementation of an authentication protocol for computer networks.
Abstract: An automated technique for protocol development is discussed along with its application to the specification, verification, and semiautomatic implementation of an authentication protocol for computer networks. An overview is given of the specification language, implementation method, and software tools used with this technique. The authentication protocol is described, along with an example of its operation. The reachability analysis technique for the verification of some protocol properties is discussed, and protocol verification software that uses this technique is described. The results of mechanical verification of some properties of this protocol are presented with a partial implementation generated automatically from the protocol specification.
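The reachability analysis mentioned here is, at its core, a breadth-first search over the global states of communicating machines and their channel contents. The toy sender/receiver pair below is an illustrative stand-in, not the authentication protocol from the paper, and a real verifier would additionally check properties such as deadlock freedom and unspecified receptions at each explored state.

from collections import deque

# Transitions: (state, action, next_state); "!m" sends m, "?m" receives m.
SENDER   = {("s0", "!req", "s1"), ("s1", "?ack", "s0")}
RECEIVER = {("r0", "?req", "r1"), ("r1", "!ack", "r0")}

def step(machine, state, send_ch, recv_ch):
    # Yield (next_state, new_send_channel, new_recv_channel) for every
    # transition of this machine that is enabled in the current state.
    for (src, act, dst) in machine:
        if src != state:
            continue
        if act.startswith("!"):
            yield dst, send_ch + (act[1:],), recv_ch
        elif recv_ch and recv_ch[0] == act[1:]:
            yield dst, send_ch, recv_ch[1:]

def reachable(bound=4):
    # Breadth-first search over global states (sender, receiver, two FIFO
    # channels), ignoring states whose channels exceed `bound`.
    start = ("s0", "r0", (), ())
    seen, frontier = {start}, deque([start])
    while frontier:
        s, r, c_sr, c_rs = frontier.popleft()
        nexts = [(ns, r, ncs, ncr) for ns, ncs, ncr in step(SENDER, s, c_sr, c_rs)]
        # For the receiver, its send channel is c_rs and its receive channel is c_sr.
        nexts += [(s, nr, ncr, ncs) for nr, ncs, ncr in step(RECEIVER, r, c_rs, c_sr)]
        for g in nexts:
            if len(g[2]) <= bound and len(g[3]) <= bound and g not in seen:
                seen.add(g)
                frontier.append(g)
    return seen

print(f"{len(reachable())} reachable global states")   # -> 4 for this toy pair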

49 citations


Journal ArticleDOI
TL;DR: In this paper, the authors suggest an interoperability approach for the transition from the ARPA Internet community to the Open System Interconnection (OSI) for the purpose of providing higher level services (i.e., session and above) without disrupting existing facilities.
Abstract: The ARPA Internet community has a well-developed, mature set of transport and internetwork protocols, which are quite successful in offering network and transport services to end-users. The protocols provide “open system interconnection” (OSI) for users, and a robust competitive market is supplied by a growing number of vendors. The CCITT and ISO have defined various network, transport, session, presentation, and application recommendations and standards which have been adopted by the international community and numerous vendors. To the largest extent possible, it is desirable to offer the higher level services (i.e., session and above) in networks based on the ARPA protocols, without disrupting existing facilities. It is also desirable to develop expertise with ISO and CCITT protocols while utilizing a robust set of transport services. Finally, given the probable, long-term dominance of the international standards, it is desirable to design a graceful transition strategy from networks based on the ARPA protocols to OSI-based networks. In this paper, we suggest an exercise in interoperability which achieves all three of these goals. Our solution is practical in the sense that it: can be easily implemented in a cost-effective manner in the short term; permits application designers to work with an OSI applications framework, while utilizing existing robust transport and network protocols; and provides the basis for a far-reaching migration strategy to the OSI protocols.
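One concrete ingredient of such an interoperability exercise is re-introducing message boundaries when OSI protocol data units are carried over a byte-stream transport such as TCP. The sketch below shows a simple length-prefixed framing of that general kind; the 4-byte header layout is illustrative only and is not taken from the paper.

import struct

def frame(pdu: bytes) -> bytes:
    # 1-byte version, 1-byte reserved, 2-byte total length, then the PDU.
    return struct.pack("!BBH", 3, 0, 4 + len(pdu)) + pdu

def deframe(stream: bytes):
    # Split a received byte stream back into the individual PDUs.
    pdus, offset = [], 0
    while offset + 4 <= len(stream):
        _, _, length = struct.unpack_from("!BBH", stream, offset)
        pdus.append(stream[offset + 4:offset + length])
        offset += length
    return pdus

wire = frame(b"CONNECT-REQUEST") + frame(b"DATA hello")
print(deframe(wire))   # -> [b'CONNECT-REQUEST', b'DATA hello']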

31 citations


Proceedings ArticleDOI
01 Oct 1986
TL;DR: It is illustrated how a building-block approach to the design of protocols can lead to better understanding and clarity of the protocols, and even to improvement in performance.
Abstract: Although it is common practice to design large systems by decomposition, it does not seem to be the case in the design of "small" distributed protocols, especially when the network is assumed reliable. Many such protocols share common features, yet are designed and proved as isolated cases, which makes them difficult to understand and even more difficult to validate. In this paper, a building-block approach to the design of distributed protocols is advocated. We take a few simple and rather basic protocols proposed by various researchers, and assert that these protocols are suitable to serve as building blocks. We then go through the exercise of designing several protocols employing these building blocks. We illustrate how such a design can lead to better understanding and clarity of the protocols, and even to improvement in performance.

26 citations


Patent
13 Aug 1986
TL;DR: A data processing system includes a host processor and a plurality of controllers (26) which are coupled to a common communication channel (22) and which are arranged to control the transfert of data between the host processor as discussed by the authors, where a look-up table containing sets of instructions corresponding to a number of different communication protocols for use with the remote processing devices is stored.
Abstract: A data processing system includes a host processor and a plurality of controllers (26) which are coupled to a common communication channel (22) and which are arranged to control the transfer of data between the host processor and a plurality of remote processing devices. The system also includes means for storing a look-up table containing sets of instructions corresponding to a number of different communication protocols for use with the remote processing devices. Switching means (84) located in each controller (26) provides an address for the controller. During a power-up operation, the address of a controller (26) associated with a selected remote processing device is transmitted to the host processor for use in addressing the look-up table to obtain a communication protocol for the selected processing device, and this information is then loaded into the controller (26) for controlling the transfer of data between the host processor and the selected processing device.
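The look-up-table mechanism in the claim can be pictured as a mapping from a controller's address to the protocol parameters it should be loaded with at power-up. The addresses, protocol names and fields below are purely illustrative.

# Host-side table: controller address -> protocol parameters to download.
PROTOCOL_TABLE = {
    0x21: {"protocol": "async",  "baud": 9600,  "parity": "even"},
    0x22: {"protocol": "bisync", "baud": 4800,  "parity": "odd"},
    0x23: {"protocol": "sdlc",   "baud": 19200, "parity": None},
}

def power_up(controller_address):
    # Return the instruction set to load into the addressed controller.
    try:
        return PROTOCOL_TABLE[controller_address]
    except KeyError:
        raise ValueError(f"no protocol entry for controller 0x{controller_address:02X}")

print(power_up(0x22))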

26 citations


Journal ArticleDOI
TL;DR: In this paper, a highly automated approach is proposed for testing the consistency of distinct representations of identical system functionality, based on dynamic comparison and analysis of observable behaviors presented by system functionality representations given at different levels of abstraction.
Abstract: A highly automated approach is proposed for testing the consistency of distinct representations of identical system functionality. This approach is based on dynamic comparison and analysis of observable behaviors presented by system functionality representations given at different levels of abstraction. These representations are encoded in a relatively mechanical way as procedures in sequential PROLOG and thus provide the capability of generating and checking the system functionality they capture. The approach is extremely flexible, straightforward to use, and particularly appropriate to systems whose externally observable behavior can be modeled by finite state automata. The use of this approach is illustrated in the context of OSI communication protocol design and validation.
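The dynamic-comparison idea can be illustrated without PROLOG: drive two representations of the same functionality with the same input sequences and check that their observable outputs agree. The two toy Mealy-style machines below stand in for specifications written at different levels of abstraction; they are not drawn from the paper.

import itertools

# Transition tables: (state, input) -> (next_state, output).
ABSTRACT = {("idle", "open"): ("busy", "ok"), ("busy", "close"): ("idle", "done")}
DETAILED = {("q0", "open"): ("q1", "ok"), ("q1", "close"): ("q0", "done")}

def run(table, start, inputs):
    # Produce the observable output sequence for one input sequence.
    state, outputs = start, []
    for sym in inputs:
        if (state, sym) not in table:
            outputs.append("REJECT")
            break
        state, out = table[(state, sym)]
        outputs.append(out)
    return outputs

def consistent(max_len=4):
    # Compare the two representations on every input sequence up to max_len.
    for n in range(1, max_len + 1):
        for seq in itertools.product(("open", "close"), repeat=n):
            if run(ABSTRACT, "idle", seq) != run(DETAILED, "q0", seq):
                return False, seq
    return True, None

print(consistent())   # -> (True, None) for these two machines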

01 Jan 1986
TL;DR: Some distributed network protocols proposed with claims that they operate correctly in the face of changing topology, without need for unbounded numbers to identify different runs of the algorithms, do not possess all the claimed properties.
Abstract: A number of distributed network protocols for reliable data transmission, connectivity test or shortest path, and topology broadcast have been proposed with claims that they operate correctly in the face of changing topology, without need for unbounded numbers to identify different runs of the algorithms. This paper shows that they do not possess all the claimed properties. However, some of them can be modified so that their correct operation can be demonstrated, at the cost of longer running time and higher communication complexity.

Journal ArticleDOI
01 Aug 1986
TL;DR: This paper presents a general Petri net reduction algorithm that reduces the number of states while preserving all desirable and undesirable properties and demonstrates the usefulness of this algorithm by applying it to the state exploration in protocol synthesis.
Abstract: The Petri net is a powerful model for analyzing communication protocols because they share many common properties. Currently, protocol analysis suffers from the state explosion problem, especially for error-recoverable protocols and multi-party protocols. Protocol synthesis relieves this problem by generating new and complicated protocols from simple subsets of the protocol models. Reduction analysis provides the theoretical ground for correct synthesis or expansion. Thus, reduction is a very important research area. In this paper, we present a general Petri net reduction algorithm that reduces the number of states while preserving all desirable and undesirable properties. To the best of our knowledge, this is the first general Petri net reduction algorithm for protocol analysis. We first present and extend Dong's [DON 83] definition of WBMs to include more subnets as WBMs. To render the reductions automated, a new concept of simple well-behaved modules (SWBMs) is introduced. By recursively performing reductions of SWBMs, complicated WBMs can be reduced. A main program is written to implement this recursive procedure. The problem is then reduced to finding conditions for SWBMs. We do this by progressing from simpler SWBMs to more complicated ones, i.e., from single-arc ones to multi-arc ones. Finally, we demonstrate the usefulness of this algorithm by applying it to state exploration in protocol synthesis. Other applications, such as error detection, performance evaluation, and software engineering, will be discussed in future work.

Proceedings ArticleDOI
05 May 1986
TL;DR: After interesting results had been obtained using a simulation model for a layered storage structure, it was decided to apply simulation also to this problem and effects of network topology, access protocols and buffering strategies will be tested.
Abstract: The desirability of an integrated (digital) communication system for medical images is widely accepted. In the USA and in Europe several experimental projects are in progress to realise (a part of) such a system. Among these is the IMAGIS project in the Netherlands. From the conclusions of the preliminary studies performed, some requirements can be formulated that such a system should meet in order to be accepted by its users. 1. The storage resolution of the images should match the maximum resolution of the presently acquired digital images. This determines the amount of data and therefore the storage requirements. 2. The desired images should be there when needed. This time constraint determines the speed requirements to be imposed on the system. As compared to current standards, very large storage capacities and very fast communication media are needed to meet these requirements. By employing caching techniques and suitable data compression schemes for the storage, and by carefully choosing the network protocols, the raw capacity demands can be alleviated. A communication network is needed to make the imaging system available over a larger area. As the network is very likely to become a major bottleneck for system performance, the effects of varying its attributes have to be carefully studied and analysed. After interesting (although preliminary) results had been obtained using a simulation model for a layered storage structure, it was decided to apply simulation to this problem as well. Effects of network topology, access protocols and buffering strategies will be tested. Changes in performance resulting from changes in various network parameters will be studied. Results of this study in its present state are presented.

Journal Article
TL;DR: In this paper, the Alternating Bit Protocol (ABP) is verified by means of process algebra; a notion of "modules" is introduced and the protocol is divided into two such modules.
Abstract: A version of the Alternating Bit Protocol is verified by means of Process Algebra. To avoid a combinatorial explosion, a notion of "modules" is introduced and the protocol is divided into two such modules. A method is developed for verifying conglomerates of modules and applied to the motivating example.
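For readers unfamiliar with the protocol being verified, the sketch below is a minimal executable rendering of the Alternating Bit Protocol over a lossy channel: each message is retransmitted, tagged with a one-bit sequence number, until the matching acknowledgement arrives. It illustrates the protocol itself, not the process-algebra verification, and the loss probability is an assumption.

import random

def delivered(p=0.7):
    # Each channel use succeeds with probability p (assumed loss model).
    return random.random() < p

def abp_send(messages, max_steps=10_000):
    received, bit, steps = [], 0, 0
    for msg in messages:
        acked = False
        while not acked:
            steps += 1
            if steps > max_steps:
                raise RuntimeError("channel too lossy")
            if delivered():                          # data frame arrives
                if not received or received[-1][0] != bit:
                    received.append((bit, msg))      # new bit: accept once
                if delivered():                      # acknowledgement arrives
                    acked = True
        bit ^= 1                                     # alternate the sequence bit
    return [m for _, m in received], steps

msgs, steps = abp_send(["a", "b", "c", "d"])
print(msgs, "delivered in", steps, "channel uses")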

Journal ArticleDOI
01 Jan 1986-Infor
TL;DR: It is shown that the behaviour of inhibitor free-choice Petri nets with deterministic firing times can be represented by probabilistic state graphs, and stationary descriptions can be obtained by standard techniques used for analysis of Markov chains.
Abstract: It is shown that the behaviour of inhibitor free-choice Petri nets with deterministic firing times can be represented by probabilistic state graphs. For bounded Petri nets the corresponding state graphs are finite, and stationary descriptions can thus be obtained by standard techniques used for analysis of Markov chains. An immediate application of such a model is performance analysis of systems of asynchronous concurrent processes and, in particular, communication protocols. Places of Petri nets model queues of messages, transitions represent events in communication networks, inhibitor arcs are used to indicate priorities of simultaneous events, and probabilities associated with free-choice classes correspond to relative frequencies of random events. The alternating bit protocol is used as an illustration of the analysis.
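The final step the abstract describes (obtaining a stationary description by standard Markov chain techniques) can be sketched directly once the finite probabilistic state graph is in hand. The three-state transition matrix below is made up for illustration; only the power-iteration step reflects the standard technique.

# Illustrative 3-state transition matrix of the probabilistic state graph.
P = [
    [0.0, 1.0, 0.0],   # 0: message queued -> being transmitted
    [0.0, 0.0, 1.0],   # 1: transmitted -> awaiting acknowledgement
    [0.7, 0.3, 0.0],   # 2: ack received (0.7) -> state 0, ack lost (0.3) -> retransmit
]

def stationary(P, iterations=1000):
    # Power iteration: repeatedly multiply a distribution by P.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

print([round(x, 4) for x in stationary(P)])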

Journal ArticleDOI
J. R. Agre
01 Aug 1986
TL;DR: A new diagnostic message protocol that provides fault diagnosis capabilities for the communications in a distributed system environment is described, which uses special messages to determine the system fault state.
Abstract: A new diagnostic message protocol that provides fault diagnosis capabilities for the communications in a distributed system environment is described. The protocol is designed to operate in conjunction with a standard end-to-end communication protocol and uses special messages to determine the system fault state. A diagnosis message is represented using a test dependency model that is derived from the system topology. These messages are used by an adaptive strategy designed to achieve specific objectives such as reduced testing cost. Using the test dependency model, a general purpose algorithm is developed for generating these strategies based on an information theory criterion. Specific properties of the protocol are discussed, and several examples of strategies for a distributed system topology are provided.
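An information-theoretic test selection rule of the general kind mentioned here can be sketched as follows: among the remaining diagnostic messages, pick the one whose pass/fail outcome splits the surviving fault hypotheses most evenly. The fault states and the test dependency table below are illustrative, not the paper's model.

import math

# Which candidate fault hypotheses cause each diagnostic message to fail.
TESTS = {
    "ping A-B": {"link-AB-down", "node-B-down"},
    "ping A-C": {"link-AC-down", "node-C-down"},
    "ping B-C": {"link-BC-down", "node-B-down", "node-C-down"},
}
CANDIDATES = {"link-AB-down", "link-AC-down", "link-BC-down",
              "node-B-down", "node-C-down", "no-fault"}

def entropy(n):
    # Uncertainty (in bits) of n equally likely hypotheses.
    return 0.0 if n <= 1 else math.log2(n)

def best_test(candidates, tests):
    # Pick the test minimising the expected remaining uncertainty,
    # assuming all surviving hypotheses are equally likely.
    def expected_entropy(fail_set):
        fail = len(candidates & fail_set)
        ok = len(candidates) - fail
        total = len(candidates)
        return (fail / total) * entropy(fail) + (ok / total) * entropy(ok)
    return min(tests, key=lambda t: expected_entropy(tests[t]))

print(best_test(CANDIDATES, TESTS))   # -> "ping B-C" (splits the six hypotheses 3 vs 3)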

Journal ArticleDOI
Frank J. Nagy
TL;DR: ACNET, Accelerator Control NETwork, is a network protocol designed to provide high performance in data acquisition from and process control of the accelerator complex.
Abstract: The Fermilab Accelerator Control System is a highly distributed computer-based control system. The accelerator complex is controlled from widely separated points by a system of three VAX-11/785s, twenty-three PDP-11s of several types and nearly one thousand microprocessors. The host system of VAXs and PDP-11s is centrally located and interconnected by three PCL11-B parallel communication buses and networking software developed at Fermilab. ACNET, Accelerator Control NETwork, is a network protocol designed to provide high performance in data acquisition from and process control of the accelerator complex. This paper presents a description of the ACNET protocol and a discussion of its implementation and performance in the currently operating control system.


Book ChapterDOI
Harry Rudin
18 Aug 1986
TL;DR: The state of the art of formal description techniques for computer-communication protocols is summarized, and a recent result is presented in which the distribution of response time is determined directly from the formal specification.
Abstract: The importance — and even the necessity — of formal description techniques for computer-communication protocols is now widely accepted. A dimension usually neglected in formal protocol specification is that of time. When the dimension of time is included, protocols can be validated, including their timeout specifications, and performance (e.g., throughput and/or response time) can be estimated directly from the formal specification. Here, the state of the art is summarized and a recent result is presented in which the distribution of response time is determined directly from the formal specification. An extensive bibliography is included.

01 Jan 1986
TL;DR: An approach to designing multiple co-ordinated expert systems wherein each expert system models the others as abstract remote data resource management agents, in the form of LISP-compatible language extensions that supply agenda-based message receipt processing, referentially transparent interagent message routing, and database primitives that are specialized for managing hypotheses in both local and remote databases.
Abstract: There are two fundamental prerequisites to the effective design and construction of distributed knowledge-based systems. First, there must be a supply of tools available that directly support the design and implementation task at as high a conceptual level as is possible. Second, there must be a design metaphor that motivates and directs the productive use of those tools. Unfortunately, the demand for working distributed expert systems currently far outstrips the supply of cleanly designed tools to support their construction, and existing design paradigms for both expert systems and distributed systems are taxed to, or even beyond, their limits in the service of distributed knowledge-based system construction. What is needed, therefore, is a set of tools that permit the smooth integration of technology from both knowledge-based and distributed systems design, and a model of how to use them. We present an approach to designing multiple co-ordinated expert systems wherein each expert system models the others as abstract remote data resource management agents. Agents negotiate access to remote resources (including data) by asynchronously passing messages to the manager of the desired resources. The agents are embedded in a Hyper Star topology that offers the benefits of a hierarchical star but permits special-purpose interagent communications links ( Hyper Links ) to be established to meet performance criteria such as timely data transfer or use of special data communications protocols. This approach has been implemented in the form of LISP-compatible language extensions that supply agenda-based message receipt processing, referentially transparent interagent message routing, resource allocation based on cost/benefit analysis, and database primitives that are specialized for managing hypotheses in both local and remote databases. It has been used in the design or implementation of several demonstrated, working distributed knowledge-based systems. We discuss at length the specific language extensions that support two important aspects of this model: management of remote resources and management of distributed hypothesis databases.

John R. Rice
01 Jan 1986
TL;DR: The multi-FLEX machines use FLEX/32 multi-computer modules to build larger machines that provide wide flexibility and scalability in applications; no network protocols are used for communication.
Abstract: The multi-FLEX machines use FLEX/32 multi-computer modules for larger machines. These machines provide wide flexibility and scalability. The innovative ingredients of the design are: (a) use of locally shared memory in addition to local and global memory, (b) very high internal and external bandwidth, (c) no use of network protocols for communication, (d) distribution of external I/O throughout the machine. The 64-node FLEX-cube would have 1500 processors and 700 megawords of memory, operate at 2.5 gigaflops and have an external I/O bandwidth of 4 gigabytes/second. It is conjectured that it could service 40,000 terminals or service one job to do weather forecasts 1000 times faster than real time (covering the entire northern hemisphere with about 500 million unknowns). *Supported in part by Air Force Office of Scientific Research grant AFOSR-84-0385 and an Army Research Office contract. I. RATIONALE FOR THE MULTI-FLEX MACHINES. The main rationale for the class of machines described here is to provide wide flexibility and scalability in applications. The FLEX-cube described can, it is conjectured, either do a weather forecast 1000 times faster than real time for the northern hemisphere or service 40,000 terminals (it is a 1500-processor, 2.5-gigaflops machine). The innovative ingredients of this design are: (a) memory hierarchies of local, locally shared and global memory (there might be more than one level of locally shared memory), (b) very high internal and intermodule communication bandwidth, (c) no use of network protocols for communication, (d) external I/O distributed throughout the machine. The design is based on the existing commercial product, the FLEX/32 of Flexible Computer Corp. Other desirable properties of this design are a high level of modularity, fault tolerance and the use of ordinary packaging. At this point this is entirely a "paper" machine. However, real machines of this type can be constructed quickly (given the money) using existing machines. The main barrier to the use of the multi-FLEX machines is in the software systems. However, the situation here is no better nor worse than for other designs. II. THE FLEX/32 MODULE. The multi-FLEX machines are built with the existing FLEX/32 machines as modules. We refer to [FLEX 86] for details of this machine; its characteristics are summarized here. Figure 1 shows a block diagram of the machine. For concreteness, we

Proceedings ArticleDOI
01 Nov 1986
TL;DR: A polynomial-time algorithm is given for determining whether a specification is satisfiable; the algorithm can actually construct a protocol that achieves the specified exchange of information, optimized with respect to two possible criteria.
Abstract: We define a notation (specification language) for describing desired patterns of communication among components of a distributed system through multiport, unreliable channels. Our language specifies the network topology, and the kinds of information transmission desired. We give a polynomial-time algorithm for determining whether a specification is satisfiable; our algorithm can actually construct a protocol that achieves the specified exchange of information, optimized with respect to two possible criteria. Examples suggest that our method can automatically synthesize reasonably complex protocols.

Journal ArticleDOI
TL;DR: The results of a study to develop the functional concepts for an intelligent optical node incorporating as many optical signal processing elements as feasible are outlined.
Abstract: In the development of future data management systems, such as the NASA Space Station, a major problem is the design and implementation of a high-performance communication network which is self-correcting and self-repairing, flexible, and evolvable. To attain the goal of designing such a network, it will be essential to incorporate distributed adaptive network control techniques. The present paper provides an outline of the functional and communication network requirements for the Space Station data management system. Attention is given to the mathematical representation of the operations being carried out to provide the required functionality at each layer of communication protocol in the model. The possible implementation of specific communication functions in optics is also considered.

Journal ArticleDOI
TL;DR: The MAP initiative aims to standardise communication protocols in industrial automation systems, where many different languages are used.
Abstract: Confusion of language applies not only to humans, but also to the world of industrial automation systems, where many different languages are used. The MAP initiative is aimed at standardising communication protocols.

Journal ArticleDOI
TL;DR: The requirements for a graphical format on which a graphical on-line communications protocol can be based are described, and a conferencing software architecture for processing the proposed format is presented.
Abstract: This article describes the requirements for a graphical format on which a graphical on-line communications protocol can be based. It is argued that on-line graphical communications is similar to graphical session capture, and thus we propose an Interactive Graphical Communications Format using the GKSM session metafile. The discussion includes items that complement the GKSM metafile as a format for on-line interactive exchanges. One key application area of such a format is multimedia on-line conferencing. Therefore, we present a conferencing software architecture for processing the proposed format. This format specification is made available to those planning multimedia conferencing systems, as a contribution toward the development of a graphical communication protocol that will permit the interoperation of these systems.

Journal ArticleDOI
TL;DR: The concept of a “reference model” (RM) is defined and the need for RMs in standards development is established and three approaches in the development and use of a RM in database management system standardization are presented.


DOI
01 Nov 1986
TL;DR: This thesis finds the semi-automatic approach to protocol implementation offers several advantages over the conventional manual one, including correctness and modularity in protocol implementation code and reduction in implementation development time.
Abstract: Formal Description Techniques (FDTs) for specifying communication protocols, and adopted FDT standards such as Estelle, have opened the door to automating the implementation of a complex communication protocol directly from its specification. After a brief overview of the Estelle FDT, we present the basic ideas and the problems encountered in developing an Estelle compiler written in C, which accepts an Estelle specification of a protocol and produces a protocol implementation in C. The practicality of this tool --- the Estelle compiler --- has been examined via a semi-automatic implementation of the ISO class 2 Transport Protocol using the tool. A manual implementation of this protocol in C under UNIX 4.2bsd was also performed and compared with the semi-automatic implementation. We find the semi-automatic approach to protocol implementation offers several advantages over the conventional manual one. These advantages include correctness and modularity in the protocol implementation code and a reduction in implementation development time. In this thesis, we discuss our experience in using the semi-automatic approach to implement the ISO class 2 Transport Protocol.
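To give a flavour of what specification-driven implementation produces, the sketch below is a table-driven protocol machine written in Python rather than the C the Estelle compiler emits; its states, events and actions are a heavily simplified illustration of a connection-oriented transport, not the ISO class 2 Transport Protocol.

# Transition table: (state, event) -> (next_state, action to perform).
TRANSITIONS = {
    ("CLOSED", "T-CONNECT-req"): ("WAIT-CC", "send CR"),
    ("WAIT-CC", "CC-received"): ("OPEN", "confirm connection"),
    ("OPEN", "T-DATA-req"): ("OPEN", "send DT"),
    ("OPEN", "T-DISCONNECT-req"): ("CLOSED", "send DR"),
}

class TransportEntity:
    def __init__(self):
        self.state = "CLOSED"

    def handle(self, event):
        # Look the (state, event) pair up in the table; unknown pairs are ignored.
        key = (self.state, event)
        if key not in TRANSITIONS:
            return f"{event} ignored in state {self.state}"
        self.state, action = TRANSITIONS[key]
        return action

tp = TransportEntity()
for ev in ("T-CONNECT-req", "CC-received", "T-DATA-req", "T-DISCONNECT-req"):
    print(ev, "->", tp.handle(ev))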