scispace - formally typeset
Author

Corrado Böhm

Bio: Corrado Böhm is an academic researcher from Sapienza University of Rome. The author has contributed to research in topics including typed lambda calculus and finite sets. The author has an h-index of 10 and has co-authored 27 publications receiving 1,936 citations.

Papers
Book
01 Jan 1979
TL;DR: Bohm and Jacopini showed that any flowchart can be drawn from combinations of standard "structured" flowchart elements; they did not prove that any COBOL program can be written without goto statements.
Abstract: It is likely that most programmers who have heard anything at all about structured programming also have heard the mysterious names "Bohm" and "Jacopini." "Oh, yes," they'll say, "all that structured programming stuff was proved by those guys Bohm and Jacopini somewhere in Italy." And yet it's exceedingly unlikely that the average programmer has read the Bohm and Jacopini paper, "Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules," published in 1966. As you begin to read the paper, it will become immediately obvious that the discussion is of an extremely abstract, theoretical nature. Serious academicians accustomed to a regular diet of such papers will no doubt wade through this one, too --- but the average COBOL application programmer probably will be overwhelmed by the time he or she reaches the end of the first paragraph. I have read the paper myself several dozen times during the past twelve years, and honesty requires me to admit that I barely understand it for a period of five minutes after reading the last paragraph. It is certain that I would be at an utter loss to try to describe Bohm and Jacopini's proof to anyone else. I say this to help prevent an inferiority complex on the part of the average reader. This is an important paper, and it does form the theoretical basis of structured programming. So you should read it. But don't feel too embarrassed if most of it is over your head. Indeed, you'll find "The Translation of 'go to' Programs to 'while' Programs," by Ashcroft and Manna [Paper 6], to be much more understandable: They, too, show that any program can be written as a structured program by simply demonstrating that any program can be translated into an equivalent structured program. One last comment about the paper by Bohm and Jacopini: Note that it does not mention programming languages.
It describes a set of flow diagrams as a "two-dimensional programming language," but it makes no mention of COBOL, ALGOL, PL/I, or any of the other languages that real-world programmers use. This is far more significant than you might think. What Bohm and Jacopini have proved in this paper is that any flowchart can be drawn from combinations of standard "structured" flowchart elements; they did not prove that any COBOL program can be written without goto statements. Indeed, if a programming language lacks the constructs to implement directly the Bohm and Jacopini flowcharts, goto-less programming is exceedingly difficult, if not impossible --- as witnessed by such degenerate languages as FORTRAN. This distinction between the theoretical possibility of structured programming, and the real-world practicality of structured programming is, of course, at the heart of the controversy and the squabbling that still goes on today, more than a decade after this paper was written. Examples of this controversy --- the theme of which is, "Yes, we know structured programming is theoretically possible, but do we really want to do it?" --- are seen in papers like Martin Hopkins' "A Case for the GOTO" [Paper 9], or Donald Knuth's "Structured Programming with go to Statements" [Paper 20].
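The construction behind this result can be sketched concretely. Below is a minimal Python illustration (the function and flowchart are my own, not from the paper): a goto-style flowchart for Euclid's GCD is simulated by a single while loop driven by a program-counter variable, which is essentially the translation scheme Ashcroft and Manna describe — every jump becomes an assignment to the program counter.

```python
# A hypothetical goto-style flowchart computing the GCD of two numbers,
# translated into a single structured while loop: each flowchart box gets
# a number, and every "goto" becomes an assignment to the counter `pc`.

def gcd_structured(a, b):
    pc = 1  # program counter: which flowchart box we are in (0 = halt)
    while pc != 0:
        if pc == 1:          # box 1: test b == 0; exit or fall through
            pc = 0 if b == 0 else 2
        elif pc == 2:        # box 2: a, b := b, a mod b; "goto" box 1
            a, b = b, a % b
            pc = 1
    return a

# gcd_structured(12, 18) -> 6
```

The price of the translation, as the surrounding controversy notes, is readability: the structured version is goto-free but obscures the original control flow, which is exactly why the theoretical result did not settle the practical debate.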

732 citations

Journal ArticleDOI
TL;DR: In the first part of the paper, flow diagrams are introduced to represent, inter alia, mappings of a set into itself; due to a suitable extension of the given set and of the basic mappings defined in it, every diagram becomes decomposable at a semantical level.
Abstract: In the first part of the paper, flow diagrams are introduced to represent, inter alia, mappings of a set into itself. Although not every diagram is decomposable into a finite number of given base diagrams, this becomes true at a semantical level due to a suitable extension of the given set and of the basic mappings defined in it. Two normalization methods of flow diagrams are given. The first has three base diagrams; the second, only two. In the second part of the paper, the second method is applied to the theory of Turing machines. With every Turing machine provided with a two-way half-tape, there is associated a similar machine, doing essentially the same job, but working on a tape obtained from the first one by interspersing alternate blank squares. The new machine belongs to the family, elsewhere introduced, generated by composition and iteration from the two machines X and R. That family is a proper subfamily of the whole family of Turing machines.

729 citations

Journal ArticleDOI
TL;DR: The notion of iteratively defined functions from and to heterogeneous term algebras is introduced as the solution of a finite set of equations of a special shape and an extension of the paradigms to the synthesis of functions of higher complexity is considered and exemplified.
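The idea of a function defined by one equation per constructor of a term algebra can be made concrete. Here is a minimal Python sketch over a made-up term algebra of arithmetic expressions; the constructors and names are purely illustrative, not taken from the paper:

```python
# An iteratively (structurally) defined function: one defining equation
# per constructor of the term algebra.
#   eval(Num n)     = n
#   eval(Add(l, r)) = eval(l) + eval(r)

from dataclasses import dataclass

@dataclass
class Num:                  # constructor Num : int -> Expr
    value: int

@dataclass
class Add:                  # constructor Add : Expr x Expr -> Expr
    left: "Num | Add"
    right: "Num | Add"

def eval_expr(t):
    # The function is the (unique) solution of the two equations above.
    if isinstance(t, Num):
        return t.value
    return eval_expr(t.left) + eval_expr(t.right)

# eval_expr(Add(Num(2), Add(Num(3), Num(4)))) -> 9
```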

248 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that a finite set of normal combinators which are pairwise non α-η-convertible is discriminable, and a discrimination algorithm was given.
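Böhm's separability result, which this paper extends from pairs to finite sets of normal forms, says that distinct normal forms can be told apart inside the calculus itself: there are arguments that drive them to different chosen results. A toy Python sketch of the idea, using Church booleans as the two non-convertible normal forms; the discriminating context is mine and purely illustrative, not the paper's general algorithm:

```python
# Two pairwise non-alpha-eta-convertible normal forms:
true  = lambda x: lambda y: x   # \x.\y.x
false = lambda x: lambda y: y   # \x.\y.y

def discriminate(term):
    # A "discriminating context": apply the term to two distinguishable
    # arguments and observe which one comes back.
    return term("first")("second")

# discriminate(true)  -> "first"
# discriminate(false) -> "second"
```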

45 citations

Book ChapterDOI
28 Sep 1992
TL;DR: In this paper, the notion of a canonical algebraic term rewriting system is introduced and interpreted in the lambda calculus by the Bohm-Piperno technique in such a way that strong normalization is preserved.
Abstract: We formalize a technique introduced by Bohm and Piperno to solve systems of recursive equations in lambda calculus without the use of the fixed point combinator and using only normal forms. To this aim we introduce the notion of a canonical algebraic term rewriting system, and we show that any such system can be interpreted in the lambda calculus by the Bohm-Piperno technique in such a way that strong normalization is preserved. This allows us to improve some recent results of Mogensen concerning efficient gödelizations ⌈⌉ : Λ → Λ of lambda calculus. In particular we prove that under a suitable gödelization there exist two lambda terms E (self-interpreter) and R (reductor), both having a normal form, such that for every (closed or open) lambda term M, E⌈M⌉ → M, and if M has a normal form N, then R⌈M⌉ → ⌈N⌉.
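The flavour of solving a recursive equation without a fixed-point combinator can be conveyed by the classic self-application trick, here in Python. This is only a rough analogue of the underlying idea — recursion recovered from terms that are themselves normal forms — and not the Böhm-Piperno construction itself:

```python
# Factorial without a fixed-point combinator: instead of tying the knot
# with Y, the step function receives (a copy of) itself and applies it
# to itself at the recursive call.

fact_step = lambda self: lambda n: 1 if n == 0 else n * self(self)(n - 1)
fact = fact_step(fact_step)   # recursion by self-application, no Y

# fact(5) -> 120
```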

32 citations


Cited by
Book
01 Jan 1982
TL;DR: Software Engineering: A Practitioner's Approach recognizes the dramatic growth in the field of software engineering and emphasizes new and important methods and tools used in the industry.

Abstract: From the Publisher: Well-suited for both the student and the working professional, Software Engineering: A Practitioner's Approach recognizes the dramatic growth in the field of software engineering and emphasizes new and important methods and tools used in the industry.

8,224 citations

01 Jan 1992
TL;DR: This thesis gives some conservative algorithms for determining whether a program satisfies constraints, and describes how to use this design information to refactor a program.
Abstract: This thesis defines a set of program restructuring operations (refactorings) that support the design, evolution and reuse of object-oriented application frameworks. The focus of the thesis is on automating the refactorings in a way that preserves the behavior of a program. The refactorings are defined to be behavior preserving, provided that their preconditions are met. Most of the refactorings are simple to implement and it is almost trivial to show that they are behavior preserving. However, for a few refactorings, one or more of their preconditions are in general undecidable. Fortunately, for some cases it can be determined whether these refactorings can be applied safely. Three of the most complex refactorings are defined in detail: generalizing the inheritance hierarchy, specializing the inheritance hierarchy and using aggregations to model the relationships among classes. These operations are decomposed into more primitive parts, and the power of these operations is discussed from the perspectives of automatability and usefulness in supporting design. Two design constraints needed in refactoring are class invariants and exclusive components. These constraints are needed to ensure that behavior is preserved across some refactorings. This thesis gives some conservative algorithms for determining whether a program satisfies these constraints, and describes how to use this design information to refactor a program.

1,193 citations

Book
01 Jan 1979
TL;DR: Finite constraint domains restrict the values each variable can take to a finite set; consistency, propagation, and integer-programming techniques make them central to constraint programming.
Abstract:
• Constraint domains in which the possible values that a variable can take are restricted to a finite set.
• Examples: Boolean constraints, or integer constraints in which each variable is constrained to lie within a finite range of integers.
• Widely used in constraint programming.
• Many real problems can be easily represented using constraint domains, e.g. scheduling, routing and timetabling.
• They involve choosing amongst a finite number of possibilities.
• Commercial importance to many businesses: e.g. deciding how air crews should be allocated to aircraft flights.
• Methods developed by different research communities: arc and node consistency techniques (artificial intelligence); bounds propagation techniques (constraint programming); integer programming (operations research).
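One of the techniques listed above, bounds propagation, fits in a few lines. A minimal Python sketch for a single made-up constraint x + y == total, tightening each variable's interval from the other's (the function and bounds are illustrative, not from the text):

```python
# Bounds propagation for the constraint x + y == total.
# Since x = total - y, x's interval shrinks to [total - y_hi, total - y_lo],
# and symmetrically for y.

def propagate_sum(x_lo, x_hi, y_lo, y_hi, total):
    x_lo = max(x_lo, total - y_hi)
    x_hi = min(x_hi, total - y_lo)
    y_lo = max(y_lo, total - x_hi)
    y_hi = min(y_hi, total - x_lo)
    return x_lo, x_hi, y_lo, y_hi

# With x in [0, 9], y in [7, 20] and x + y == 10:
# propagate_sum(0, 9, 7, 20, 10) -> (0, 3, 7, 10)
```

A full finite-domain solver interleaves such propagation steps with search over the remaining choices.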

1,123 citations

Book
31 Jul 2013
TL;DR: The lambda calculus has been extended with types and is used in functional programming languages (Haskell, Clean) and proof assistants (Coq, Isabelle, HOL), which are applied in designing and verifying IT products and mathematical proofs.
Abstract: This handbook with exercises reveals in formalisms, hitherto mainly used for hardware and software design and verification, unexpected mathematical beauty. The lambda calculus forms a prototype universal programming language, which in its untyped version is related to Lisp, and was treated in the first author's classic The Lambda Calculus (1984). The formalism has since been extended with types and used in functional programming (Haskell, Clean) and proof assistants (Coq, Isabelle, HOL), used in designing and verifying IT products and mathematical proofs. In this book, the authors focus on three classes of typing for lambda terms: simple types, recursive types and intersection types. It is in these three formalisms of terms and types that the unexpected mathematical beauty is revealed. The treatment is authoritative and comprehensive, complemented by an exhaustive bibliography, and numerous exercises are provided to deepen the readers' understanding and increase their confidence using types.

927 citations

Journal ArticleDOI
TL;DR: My considerations are that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, and that his intellectual powers are rather geared to master static relations and his powers to visualize processes evolving in time are relatively poorly developed.
Abstract: For a number of years I have been familiar with the observation that the quality of programmers is a decreasing function of the density of go to statements in the programs they produce. More recently I discovered why the use of the go to statement has such disastrous effects, and I became convinced that the go to statement should be abolished from all "higher level" programming languages (i.e. everything except, perhaps, plain machine code). At that time I did not attach too much importance to this discovery; I now submit my considerations for publication because in very recent discussions in which the subject turned up, I have been urged to do so. My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the "making" of the corresponding process is delegated to the machine. My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible. Let us now consider how we can characterize the progress of a process. (You may think about this question in a very concrete manner: suppose that a process, considered as a time succession of actions, is stopped after an arbitrary action, what data do we have to fix in order that we can redo the process until the very same point?)
If the program text is a pure concatenation of, say, assignment statements (for the purpose of this discussion regarded as the descriptions of single actions) it is sufficient to point in the program text to a point between two successive action descriptions. (In the absence of go to statements I can permit myself the syntactic ambiguity in the last three words of the previous sentence: if we parse …
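Dijkstra's question about "coordinates" for a running process can be made concrete: in straight-line code a single textual index suffices, but once a loop appears, the same textual position is reached many times, so pinning down progress requires pairing the textual position with an iteration count. A small illustrative Python sketch (my own, not from the paper):

```python
# Recording the "coordinates" of a process: inside a loop, the textual
# position alone is ambiguous; the pair (position, iteration count)
# identifies a unique moment of the dynamic process.

def run(n):
    trace = []
    total = 0
    for i in range(n):
        # Same textual position, reached n times; the counter i
        # disambiguates which visit this is.
        trace.append(("loop-body", i))
        total += i
    return total, trace

# run(3) -> (3, [("loop-body", 0), ("loop-body", 1), ("loop-body", 2)])
```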

911 citations