Open Access Proceedings Article

The Semantics of a Simple Language for Parallel Programming.

Gilles Kahn
pp. 471-475
TLDR
A simple language for parallel programming is described and its mathematical properties are studied to make a case for more formal languages for systems programming and the design of operating systems.
Abstract
In this paper, we describe a simple language for parallel programming. Its semantics is studied thoroughly. The desirable properties of this language and its deficiencies are exhibited by this theoretical study. Basic results on parallel program schemata are given. We hope in this way to make a case for a more formal (i.e. mathematical) approach to the design of languages for systems programming and the design of operating systems.

There is a wide disagreement among systems designers as to what are the best primitives for writing systems programs. In this paper, we describe a simple language for parallel programming and study its mathematical properties.

1. A SIMPLE LANGUAGE FOR PARALLEL PROGRAMMING

The features of our mini-language are exhibited on the sample program S in Figure 1. The conventions are close to Algol and we only insist upon the new features. The program S consists of a set of declarations and a body. Variables of type integer channel are declared at line (1), and for any simple type σ (boolean, real, etc.) we could have declared a σ channel. Then processes f, g and h are declared, much like procedures. Aside from usual parameters (passed by value in this example, like INIT at line (3)), we can declare in the heading of the process how it is linked to other processes: at line (2), f is stated to communicate via two input lines that can carry integers, and one similar output line. The body of a process is a usual Algol program, except for the ability to wait for something on an input line (e.g. at (4)) or to send a variable on a line of compatible type (e.g. at (5)). The process stays blocked on a wait until something is sent on this line by another process, but nothing can prevent a process from performing a send on a line. In other words, processes communicate via first-in first-out (fifo) queues. Calling instances of the processes is done in the body of the main program at line (6), where the actual names of the channels are bound to the formal parameters of the processes. The infix operator par initiates the concurrent activation of the processes. Such a style of programming is close to many systems using EVENT mechanisms ([1, 2, 3, 4]).

    Begin
    (1)   Integer channel X, Y, Z, T1, T2;
    (2)   Process f(integer in U, V; integer out W);
            Begin integer I; logical B;
              B := true;
              Repeat Begin
    (4)         I := if B then wait(U) else wait(V);
    (7)         print(I);
    (5)         send I on W;
                B := not B;
              End;
            End;
          Process g(integer in U; integer out V, W);
            Begin integer I; logical B;
              B := true;
              Repeat Begin
                I := wait(U);
                if B then send I on V else send I on W;
                B := not B;
              End;
            End;
    (3)   Process h(integer in U; integer out V; integer INIT);
            Begin integer I;
              send INIT on V;
              Repeat Begin
                I := wait(U);
                send I on V;
              End;
            End;
          Comment: body of main program;
    (6)   f(Y, Z, X) par g(X, T1, T2) par h(T1, Y, 0) par h(T2, Z, 1);
    End;

    Figure 1: Sample parallel program S.

A pictorial representation of the program is the schema P in Figure 2, where the nodes represent processes and the arcs communication channels between these processes.

What sort of things would we like to prove about a program like S? Firstly, that all processes in S run forever. Secondly, more precisely, that S prints out (at line (7)) an alternating sequence of 0's and 1's forever. Third, that if one of the processes were to stop at some time for an extraneous reason, the whole system would stop.
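To make the operational behaviour of S concrete, here is a minimal sketch in Python (not the paper's notation): each process becomes a thread and each channel an unbounded queue.Queue, so that get() plays the role of wait (blocking until a value arrives) and put() plays the role of send (which never blocks on an unbounded queue). The function and channel names mirror Figure 1; the timed sleep at the end is only an arbitrary way to let the daemon threads print a few values before the program exits.

    import itertools
    import queue
    import threading
    import time

    def f(u, v, w):
        # Alternately wait on U and V, print the value, then forward it on W.
        for b in itertools.cycle([True, False]):
            i = u.get() if b else v.get()
            print(i)
            w.put(i)

    def g(u, v, w):
        # Wait on U and forward the value alternately on V and W.
        for b in itertools.cycle([True, False]):
            i = u.get()
            (v if b else w).put(i)

    def h(u, v, init):
        # Emit INIT once, then copy everything arriving on U onto V.
        v.put(init)
        while True:
            v.put(u.get())

    if __name__ == "__main__":
        X, Y, Z, T1, T2 = (queue.Queue() for _ in range(5))
        # Concurrent activation, mirroring line (6):
        # f(Y, Z, X) par g(X, T1, T2) par h(T1, Y, 0) par h(T2, Z, 1)
        for target, args in [(f, (Y, Z, X)), (g, (X, T1, T2)),
                             (h, (T1, Y, 0)), (h, (T2, Z, 1))]:
            threading.Thread(target=target, args=args, daemon=True).start()
        time.sleep(0.1)  # let the network run briefly; it prints 0 1 0 1 ...

Because the two instances of h seed Y and Z with 0 and 1 respectively, and f merges its two inputs alternately, the sketch exhibits the behaviour claimed for S: an alternating stream of 0's and 1's, and a global halt if any one thread is stopped.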
The ability to state such properties of a parallel program formally, and to prove them within a formal logical framework, is the central motivation for the theoretical study of the next sections.

2. PARALLEL COMPUTATION

Informally speaking, a parallel computation is organized in the following way: some autonomous computing stations are connected to each other in a network by communication lines. Computing stations exchange information through these lines. A given station computes on data coming along ...


Citations
Journal Article

Communicating sequential processes

TL;DR: It is suggested that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method.
Proceedings Article

Universally composable security: a new paradigm for cryptographic protocols

TL;DR: The notion of universally composable security is introduced for defining the security of cryptographic protocols; it guarantees security even when a secure protocol is composed with an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system.
Journal Article

Generative communication in Linda

TL;DR: This work is particularly concerned with implementation of the dynamic global name space that the generative communication model requires, and its implications for systems programming in distributed settings generally and on integrated network computers in particular.
Journal Article

The synchronous data flow programming language LUSTRE

TL;DR: The authors describe LUSTRE, a data flow synchronous language designed for programming reactive systems, such as automatic control and monitoring systems, as well as for describing hardware.
References
Journal Article

The nucleus of a multiprogramming system

TL;DR: This paper describes the philosophy and structure of a multi-programming system that can be extended with a hierarchy of operating systems to suit diverse requirements of program scheduling and resource allocation.
Journal Article

Inductive methods for proving properties of programs

TL;DR: This paper has two main purposes: clarification and extension of known results about the computation of recursive programs, with emphasis on the difference between theoretical and practical approaches.
Proceedings Article

Implementation and applications of Scott's logic for computable functions

TL;DR: It is shown how the syntax and semantics of a simple programming language may be described completely in the logic, and an example of a theorem which relates syntactic and semantic properties of programs and which can be stated and proved within the logic is given.
Book

Recursive definitions of partial functions and their computations

TL;DR: In this article, the authors present a syntactic and semantic model for recursive definitions and study the relation between the functions a recursive definition computes and its fixpoints.

Proof-techniques for recursive programs.

TL;DR: The concept of the least fixed-point of a continuous function is the unifying thread of the report; the connections between fixed-points and recursive programs are detailed in Chapter 2, providing some insight into practical implementations of recursion.