
Showing papers in "Journal of Functional Programming in 2021"


Journal ArticleDOI
TL;DR: Cogent, as discussed by the authors, is a functional programming language with a uniqueness type system, which eliminates the need for a trusted runtime or garbage collector while still guaranteeing memory safety, a crucial property for safety and security. It also allows the compiler to produce a proof, via translation validation, certifying the correctness of the generated C code with respect to the semantics of the Cogent source program.
Abstract: This paper presents a framework aimed at significantly reducing the cost of proving functional correctness for low-level operating systems components. The framework is designed around a new functional programming language, Cogent. A central aspect of the language is its uniqueness type system, which eliminates the need for a trusted runtime or garbage collector while still guaranteeing memory safety, a crucial property for safety and security. Moreover, it allows us to assign two semantics to the language: The first semantics is imperative, suitable for efficient C code generation, and the second is purely functional, providing a user-friendly interface for equational reasoning and verification of higher-level correctness properties. The refinement theorem connecting the two semantics allows the compiler to produce a proof via translation validation certifying the correctness of the generated C code with respect to the semantics of the Cogent source program. We have demonstrated the effectiveness of our framework for implementation and for verification through two file system implementations.

13 citations


Journal ArticleDOI
TL;DR: An expressive universe of syntaxes with binding is presented, and it is demonstrated how to implement scope-safe traversals once and for all by generic programming, and how to derive properties of these traversals by generic proving.
Abstract: The syntax of almost every programming language includes a notion of binder and corresponding bound occurrences, along with the accompanying notions of α-equivalence, capture-avoiding substitution, typing contexts, runtime environments, and so on. In the past, implementing and reasoning about programming languages required careful handling to maintain the correct behaviour of bound variables. Modern programming languages include features that enable constraints like scope safety to be expressed in types. Nevertheless, the programmer is still forced to write the same boilerplate over and over again for each new implementation of a scope-safe operation (e.g., renaming, substitution, desugaring, printing), and then again for correctness proofs. We present an expressive universe of syntaxes with binding and demonstrate how to (1) implement scope-safe traversals once and for all by generic programming; and (2) derive properties of these traversals by generic proving. Our universe description, generic traversals and proofs, and our examples have all been formalised in Agda and are available in the accompanying material online at https://github.com/gallais/generic-syntax.
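
The paper's development is generic and formalised in Agda. As a rough flavour of what scope safety in types buys, here is a minimal Haskell sketch of well-scoped terms in the style of Bird and Paterson, where renaming and capture-avoiding substitution are scope-safe by construction (the datatype and names are illustrative, not the paper's):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Well-scoped lambda terms: 'v' is the type of free variables; going
-- under a binder extends the scope by exactly one variable (Nothing).
data Term v
  = Var v
  | App (Term v) (Term v)
  | Lam (Term (Maybe v))
  deriving Functor

-- Renaming is just 'fmap'; capture-avoiding substitution is a bind.
subst :: (v -> Term w) -> Term v -> Term w
subst f (Var v)   = f v
subst f (App s t) = App (subst f s) (subst f t)
subst f (Lam b)   = Lam (subst f' b)
  where f' Nothing  = Var Nothing       -- the bound variable stays put
        f' (Just v) = fmap Just (f v)   -- weaken substitutes into the new scope
```

Ill-scoped terms simply fail to type-check; the paper's universe construction delivers such traversals once and for all, for every syntax with binding, rather than per datatype as here.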

10 citations


Journal ArticleDOI
TL;DR: In this article, a type-directed operational semantics (TDOS) is proposed for calculi with disjoint intersection types and a merge operator, enabling expressive forms of object composition, and simple solutions to hard modularity problems.
Abstract: Calculi with disjoint intersection types support a symmetric merge operator with subtyping. The merge operator generalizes record concatenation to any type, enabling expressive forms of object composition, and simple solutions to hard modularity problems. Unfortunately, recent calculi with disjoint intersection types and the merge operator lack a (direct) operational semantics with expected properties such as determinism and subject reduction, and only account for terminating programs. This paper proposes a type-directed operational semantics (TDOS) for calculi with intersection types and a merge operator. We study two variants of calculi in the literature. The first calculus, called λi, is a variant of a calculus presented by Oliveira et al. (2016) and closely related to another calculus by Dunfield (2014). Although Dunfield proposes a direct small-step semantics for her calculus, her semantics lacks both determinism and subject reduction. Using our TDOS, we obtain a direct semantics for λi that has both properties. The second calculus, called λi+, employs the well-known subtyping relation of Barendregt, Coppo and Dezani-Ciancaglini (BCD). Therefore, λi+ extends the more basic subtyping relation of λi, and also adds support for record types and nested composition (which enables recursive composition of merged components). To fully obtain determinism, both λi and λi+ employ a disjointness restriction proposed in the original λi calculus. As an added benefit the TDOS approach deals with recursion in a straightforward way, unlike previous calculi with disjoint intersection types where recursion is problematic. We relate the static and dynamic semantics of λi to the original version of the calculus and the calculus by Dunfield. Furthermore, for λi+, we show a novel formulation of BCD subtyping, which is algorithmic, has a very simple proof of transitivity and allows for the modular addition of distributivity rules (i.e. without affecting other rules of subtyping). All results have been fully formalized in the Coq theorem prover.

9 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a method for explaining the results produced by dynamic programming (DP) algorithms based on retaining a granular representation of values that are aggregated during program execution.
Abstract: In this paper, we present a method for explaining the results produced by dynamic programming (DP) algorithms. Our approach is based on retaining a granular representation of values that are aggregated during program execution. The explanations that are created from the granular representations can answer questions of why one result was obtained instead of another and therefore can increase the confidence in the correctness of program results. Our focus on dynamic programming is motivated by the fact that dynamic programming offers a systematic approach to implementing a large class of optimization algorithms which produce decisions based on aggregated value comparisons. It is those decisions that the granular representation can help explain. Moreover, the fact that dynamic programming can be formalized using semirings supports the creation of a Haskell library for dynamic programming that has two important features. First, it allows programmers to specify programs by recurrence relationships from which efficient implementations are derived automatically. Second, the dynamic programs can be formulated generically (as type classes), which supports the smooth transition from programs that only produce results to programs that can run with granular representation and also produce explanations. Finally, we also demonstrate how to anticipate user questions about program results and how to produce corresponding explanations automatically in advance.
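
As a flavour of the semiring formulation described above, here is a minimal Haskell sketch, independent of the paper's library (all names are illustrative): 'plus' aggregates alternative decisions, 'times' sequences a decision with a subproblem, and a memoised recurrence over the min-plus semiring yields a coin-change optimiser.

```haskell
import Data.Array (Array, listArray, (!))

-- Aggregating alternatives ('plus') and sequencing decisions ('times').
class Semiring r where
  zero, one   :: r
  plus, times :: r -> r -> r

-- Min-plus ("tropical") semiring; Nothing plays the role of +infinity.
newtype MinPlus = MinPlus (Maybe Int) deriving Show

instance Semiring MinPlus where
  zero = MinPlus Nothing
  one  = MinPlus (Just 0)
  plus (MinPlus Nothing) y                   = y
  plus x (MinPlus Nothing)                   = x
  plus (MinPlus (Just x)) (MinPlus (Just y)) = MinPlus (Just (min x y))
  times (MinPlus x) (MinPlus y)              = MinPlus ((+) <$> x <*> y)

-- Fewest coins summing to n, written as a memoised recurrence.
fewestCoins :: [Int] -> Int -> MinPlus
fewestCoins coins n = table ! n
  where
    table :: Array Int MinPlus
    table = listArray (0, n) [ go k | k <- [0 .. n] ]
    go 0 = one
    go k = foldr plus zero
             [ MinPlus (Just 1) `times` (table ! (k - c)) | c <- coins, c <= k ]
```

For example, fewestCoins [1,5,7] 10 evaluates to MinPlus (Just 2). Swapping the semiring changes what the same recurrence computes; retaining a granular representation instead of a fully aggregated one is what enables the paper's explanations.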

9 citations


Journal ArticleDOI
TL;DR: In this article, the authors use hs-to-coq to translate the Haskell containers library into Coq and verify it against specifications derived from a variety of sources, including type class laws, the library's test suite, and interfaces from Coq's standard library.
Abstract: Good tools can bring mechanical verification to programs written in mainstream functional languages. We use hs-to-coq to translate significant portions of Haskell’s containers library into Coq, and verify it against specifications that we derive from a variety of sources including type class laws, the library’s test suite, and interfaces from Coq’s standard library. Our work shows that it is feasible to verify mature, widely used, highly optimized, and unmodified Haskell code. We also learn more about the theory of weight-balanced trees, extend hs-to-coq to handle partiality, and – since we found no bugs – attest to the superb quality of well-tested functional code.
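
One by-product of the verification was a precise account of the balance invariant of the weight-balanced trees underlying Data.Set and Data.Map. As a rough sketch (not the library's code), the invariant can be stated as an executable check; containers uses the weight factor delta = 3:

```haskell
-- A weight-balanced tree caching its size, as in Data.Set.
data Tree a = Tip | Bin Int (Tree a) a (Tree a)

size :: Tree a -> Int
size Tip           = 0
size (Bin s _ _ _) = s

delta :: Int
delta = 3   -- the weight factor used by containers

-- Siblings may differ in weight by at most a factor of delta,
-- with an exception for very small trees.
balanced :: Tree a -> Bool
balanced Tip = True
balanced (Bin _ l _ r) =
  (size l + size r <= 1
    || (size l <= delta * size r && size r <= delta * size l))
  && balanced l && balanced r
```

The verification must show that every exported operation preserves this invariant, which is exactly where the subtle corner cases of the rebalancing code live.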

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of higher inductive types.
Abstract: Proof assistants based on dependent type theory provide expressive languages for both programming and proving within the same system. However, all of the major implementations lack powerful extensionality principles for reasoning about equality, such as function and propositional extensionality. These principles are typically added axiomatically, which disrupts the constructive properties of these systems. Cubical type theory provides a solution by giving computational meaning to Homotopy Type Theory and Univalent Foundations, in particular to the univalence axiom and higher inductive types (HITs). This paper describes an extension of the dependently typed functional programming language Agda with cubical primitives, making it into a full-blown proof assistant with native support for univalence and a general schema of HITs. These new primitives allow the direct definition of function and propositional extensionality as well as quotient types, all with computational content. Additionally, thanks also to copatterns, bisimilarity is equivalent to equality for coinductive types. The adoption of cubical type theory extends Agda with support for a wide range of extensionality principles, without sacrificing type checking and constructivity.

7 citations


Journal ArticleDOI
TL;DR: The introduction of the Holacracy organisational model is intended not only to mitigate the silo effects of departmentalisation but also to counteract the filtering of information along hierarchical lines.
Abstract: This article examines which functions the management concept Holacracy fulfils for organisations and which consequences result from it. The empirical basis is data collected in five holacratic organisations; the theoretical foundation is Luhmann's concept of the formal organisation. The introduction of the Holacracy organisational model is intended not only to mitigate the silo effects of departmentalisation but also to counteract the filtering of information along hierarchical lines. The price is a thorough formalisation of the organisation. Unintended side effects include, among others, the proliferation of the formal structure, uncertainty in the face of a rapidly changing formal structure, the possibility of withdrawing one's labour, the reduction of initiatives beyond the formal structure, and the rigidity of the frame imposed by the prescribed holacratic organisational principles.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that novice programmers should also be taught to consider the structure of output data, leading them towards structurally corecursive programs and complementing the input-driven design lesson at the core of the influential textbook “How to Design Programs”.
Abstract: The observation that program structure follows data structure is a key lesson in introductory programming: good hints for possible program designs can be found by considering the structure of the data concerned. In particular, this lesson is a core message of the influential textbook “How to Design Programs” by Felleisen, Findler, Flatt, and Krishnamurthi. However, that book discusses using only the structure of input data for guiding program design, typically leading towards structurally recursive programs. We argue that novice programmers should also be taught to consider the structure of output data, leading them also towards structurally corecursive programs.
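
A minimal Haskell illustration of the contrast (our example, not the paper's): the first function's shape follows the structure of its input list, while the second is structurally corecursive, its shape following the output list it produces.

```haskell
import Data.List (unfoldr)

-- Input-driven: the recursion mirrors the structure of the input.
total :: [Int] -> Int
total []       = 0
total (x : xs) = x + total xs

-- Output-driven: each step decides whether to produce another element
-- of the result; the recursion mirrors the structure of the output.
digits :: Int -> [Int]   -- decimal digits, least significant first
digits = unfoldr step
  where step 0 = Nothing   -- note: digits 0 = []
        step n = Just (n `mod` 10, n `div` 10)
```

Asking "what is the structure of the result?" suggests the unfold immediately, even though nothing about the Int input hints at a list shape.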

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a region-based memory management scheme with support for generational garbage collection, which is implemented in the MLKit Standard ML compiler; its performance is compared with that of executables generated with the state-of-the-art MLton Standard ML compiler.
Abstract: We present a region-based memory management scheme with support for generational garbage collection. The scheme features a compile-time region inference algorithm, which associates values with logical regions, and builds on a region type system that deploys region types at runtime to avoid the overhead of write barriers and to support partly tag-free garbage collection. The scheme is implemented in the MLKit Standard ML compiler, which generates native x64 machine code. Besides demonstrating a number of important formal properties of the scheme, we measure the scheme’s characteristics, for a number of benchmarks, and compare the performance of the generated executables with the performance of executables generated with the MLton state-of-the-art Standard ML compiler and configurations of the MLKit with and without region inference and generational garbage collection enabled. Although region inference often serves the purpose of generations, combining region inference with generational garbage collection is shown often to be superior to combining region inference with non-generational collection despite the overhead introduced by a larger amount of memory waste, due to region fragmentation.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the authors define a notion of correctness for monadic sequential decision problems and identify three conditions that allow them to prove a correctness result for a monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction.
Abstract: In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman’s backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg–Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.’s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material.
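
The paper's framework is monadic and formalised in Idris. For orientation, here is a Haskell sketch of plain backward induction in the deterministic special case (dynamics, rewards, and names are invented for illustration; the paper's generic, monadic version is considerably more abstract):

```haskell
type State   = Int
type Control = Int
type Policy  = State -> Control

-- Toy dynamics and rewards.
next :: State -> Control -> State
next x u = x + u

reward :: State -> Control -> Double
reward x u = fromIntegral (x * u)

controls :: State -> [Control]
controls _ = [-1, 0, 1]

-- Total reward of following a list of policies, one per remaining step.
value :: [Policy] -> State -> Double
value []       _ = 0
value (p : ps) x = let u = p x in reward x u + value ps (next x u)

-- Backward induction: the policy list for n steps extends the optimal
-- tail for n - 1 steps with a best first decision (Bellman's principle).
backwardInduction :: Int -> [Policy]
backwardInduction 0 = []
backwardInduction n = best : ps
  where
    ps = backwardInduction (n - 1)
    best x = snd (maximum [ (reward x u + value ps (next x u), u)
                          | u <- controls x ])
```

The correctness question the paper studies is precisely when the analogous monadic program computes policies that are optimal with respect to the measure function.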

4 citations


Journal ArticleDOI
TL;DR: This article examines which consequences of introducing post-bureaucratic ways of working in large organisations can be observed with regard to the handling of hierarchy, and how these can be situated from the perspective of organisational sociology.
Abstract: This article examines which consequences of introducing post-bureaucratic ways of working in large organisations can be observed with regard to the handling of hierarchy, and how these can be situated from the perspective of organisational sociology. The aim is to add a differentiated perspective to the often one-sidedly critical debate on the role of hierarchy in organisations, one that takes the question of functions as the starting point of the analysis. Such an analysis is carried out on two empirical cases against the background of equivalence-functionalist assumptions. The reference problems addressed by structural solutions in the form of formal as well as informal hierarchy are identified, and their respective follow-on problems are traced. It becomes clear that the realisation of post-bureaucratic ways of working in the cases studied is shaped by the emergence of tensions between competing structures of expectations, both at organisational interfaces and within the post-bureaucratic units. The article closes with a brief summary and an outlook on possible ways of building on the perspective proposed here.

Journal ArticleDOI
TL;DR: In this paper, the geometrically convex monad has been formalized in the Coq proof assistant, from which reusable theories about mathematical structures such as convex spaces and concrete categories are integrated in a framework for monadic equational reasoning.
Abstract: The algebraic properties of the combination of probabilistic choice and nondeterministic choice have long been a research topic in program semantics. This paper explains a formalization in the Coq proof assistant of a monad equipped with both choices: the geometrically convex monad. This formalization has an immediate application: it provides a model for a monad that implements a non-trivial interface which allows for proofs by equational reasoning using probabilistic and nondeterministic effects. We explain the technical choices we made to go from the literature to a complete Coq formalization, from which we identify reusable theories about mathematical structures such as convex spaces and concrete categories, and that we integrate in a framework for monadic equational reasoning.

Journal ArticleDOI
TL;DR: In this article, a minimal approach to dealing with (the lack of) extensional equality in Martin-Löf's intensional type theories is presented, without extending the theories or using full-fledged setoids.
Abstract: In verified generic programming, one cannot exploit the structure of concrete data types but has to rely on well-chosen sets of specifications or abstract data types (ADTs). Functors and monads are at the core of many applications of functional programming. This raises the question of what useful ADTs for verified functors and monads could look like. The functorial map of many important monads preserves extensional equality. For instance, if f, g : A → B are extensionally equal, that is, f x = g x for all x : A, then map f and map g are also extensionally equal. This suggests that preservation of extensional equality could be a useful principle in verified generic programming. We explore this possibility with a minimalist approach: we deal with (the lack of) extensional equality in Martin-Löf’s intensional type theories without extending the theories or using full-fledged setoids. Perhaps surprisingly, this minimal approach turns out to be extremely useful. It allows one to derive simple generic proofs of monadic laws but also verified, generic results in dynamical systems and control theory. In turn, these results avoid tedious code duplication and ad hoc proofs. Thus, our work is a contribution toward pragmatic, verified generic programming.


Journal ArticleDOI
TL;DR: In this paper, a linear-time solution to the puzzle is derived: the structure of a greedy algorithm is justified by predicate logic, the greedy condition is proved constructively in a dependently typed proof assistant, and the greedy step as well as the final linear-time optimisation are calculated by equational reasoning.
Abstract: Consider the following puzzle: given a number, remove k digits such that the resulting number is as large as possible. Various techniques are employed to derive a linear-time solution to the puzzle: we justify the structure of a greedy algorithm by predicate logic, give a constructive proof of the greedy condition using a dependently typed proof assistant and calculate the greedy step as well as the final, linear-time optimisation by equational reasoning.
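
For concreteness, here is a rough Haskell sketch of the greedy, linear-time strategy that the paper derives and verifies formally (the code itself is ours, not the paper's): scan once, keeping a stack of retained digits, and while deletions remain, drop any retained digit that is smaller than the incoming one.

```haskell
-- Remove k digits so that the remaining number is as large as possible.
maxAfterRemoving :: Int -> String -> String
maxAfterRemoving = go []
  where
    -- 'st' holds the digits kept so far, most recent first.
    go st k (d : ds)
      | k > 0, (t : st') <- st, t < d = go st' (k - 1) (d : ds) -- undo keeping t
      | otherwise                     = go (d : st) k ds        -- keep d
    go st k [] = reverse (drop k st)  -- leftover deletions come off the end
```

Each digit is pushed and popped at most once, so the scan is linear; for example, maxAfterRemoving 1 "1432" yields "432". The greedy condition justifying the pop is what the paper proves in a dependently typed proof assistant.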

Journal ArticleDOI
TL;DR: In this article, the authors present a meta-theory for the Gradually Typed Lambda Calculus (GTLC) and its underlying cast calculus, which is defined in Agda.
Abstract: The research on gradual typing has led to many variations on the Gradually Typed Lambda Calculus (GTLC) of Siek & Taha (2006) and its underlying cast calculus. For example, Wadler and Findler (2009) added blame tracking, Siek et al. (2009) investigated alternate cast evaluation strategies, and Herman et al. (2010) replaced casts with coercions for space efficiency. The meta-theory for the GTLC has also expanded beyond type safety to include blame safety (Tobin-Hochstadt & Felleisen, 2006), space consumption (Herman et al., 2010), and the gradual guarantees (Siek et al., 2015). These results have been proven for some variations of the GTLC but not others. Furthermore, researchers continue to develop variations on the GTLC, but establishing all of the meta-theory for new variations is time-consuming. This article identifies abstractions that capture similarities between many cast calculi in the form of two parameterized cast calculi, one for the purposes of language specification and the other to guide space-efficient implementations. The article then develops reusable meta-theory for these two calculi, proving type safety, blame safety, the gradual guarantees, and space consumption. Finally, the article instantiates this meta-theory for eight cast calculi including five from the literature and three new calculi. All of these definitions and theorems, including the two parameterized calculi, the reusable meta-theory, and the eight instantiations, are mechanized in Agda making extensive use of module parameters and dependent records to define the abstractions.

Journal ArticleDOI
TL;DR: The Segments problem is put forward as an alternative challenge to investigate the problem-solving skills of functional programmers and gives rise to seven different high-level solution strategies that can be further divided into 17 subclasses.
Abstract: Elliot Soloway’s Rainfall problem is a well-known and well-studied problem to investigate the problem-solving strategies of programmers. Kathi Fisler investigated this programming challenge from the point of view of functional programmers. She showed that this particular challenge gives rise to five different high-level solution strategies, of which three are predominant and cover over 80% of all chosen solutions. In this study, we put forward the Segments problem as an alternative challenge to investigate the problem-solving skills of functional programmers. Analysis of the student solutions, their high-level solution strategies, and corresponding archetype solutions shows that the Segments problem gives rise to seven different high-level solution strategies that can be further divided into 17 subclasses. The Segments problem is particularly suited to investigate problem-solving skills that involve list processing and higher-order functions.

Journal ArticleDOI
TL;DR: Companies see themselves confronted with considerable pressure to change and with socially complex problems for which agility is traded as the solution; a structured reflection process is proposed as a decision aid for organisational design in line with consciously negotiated standpoints.
Abstract: Companies see themselves confronted with considerable pressure to change and with socially complex problems for which agility is traded as the solution. They attach to it the hope of accomplishing a change that is only vaguely defined. In the current discourse on agility, no consensus on the term has so far emerged, either in everyday work or in research. The term offers a projection surface on which different worldviews collide with great force and little reflection. After an analysis of the term from a cognitive-linguistic perspective and an examination of the core promises that have been associated with it since the beginning of the agility discourse at the end of the 20th century, a structured reflection process, the Patterns of Integrated Organization, is presented. It serves to clarify the longing projected onto the term and thus as a decision aid for organisational design in line with consciously negotiated standpoints.

Journal ArticleDOI
TL;DR: Van Strydonck et al., as mentioned in this paper, propose an approach to dynamic checking of separation logic contracts that relies on capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control.
Abstract: Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. This article is an extended version of one that was presented at ICFP 2019 (Van Strydonck et al., 2019).

Journal ArticleDOI
TL;DR: ARel as mentioned in this paper is a type-and-effect system for reasoning about the relative cost (the difference in the evaluation cost) of array-manipulating, higher order functional-imperative programs.
Abstract: Relational cost analysis aims at formally establishing bounds on the difference in the evaluation costs of two programs. As a particular case, one can also use relational cost analysis to establish bounds on the difference in the evaluation cost of the same program on two different inputs. One way to perform relational cost analysis is to use a relational type-and-effect system that supports reasoning about relations between two executions of two programs. Building on this basic idea, we present a type-and-effect system, called ARel, for reasoning about the relative cost (the difference in the evaluation cost) of array-manipulating, higher order functional-imperative programs. The key ingredient of our approach is a new lightweight type refinement discipline that we use to track relations (differences) between two mutable arrays. This discipline combined with Hoare-style triples built into the types allows us to express and establish precise relative costs of several interesting programs that imperatively update their data. We have implemented ARel using ideas from bidirectional type checking.

Journal ArticleDOI
TL;DR: StkTokens as discussed by the authors is a calling convention that provably enforces well-bracketed control flow and local state encapsulation on a capability machine and is based on linear capabilities.
Abstract: We propose and study StkTokens: a new calling convention that provably enforces well-bracketed control flow and local state encapsulation on a capability machine. The calling convention is based on linear capabilities: a type of capabilities that are prevented from being duplicated by the hardware. In addition to designing and formalizing this new calling convention, we also contribute a new way to formalize and prove that it effectively enforces well-bracketed control flow and local state encapsulation using what we call a fully abstract overlay semantics.

Journal ArticleDOI
TL;DR: In this article, the authors present their experience in modifying MLton, a whole-program optimizing compiler for Standard ML, for use in embedded and real-time domains. They focus primarily on the language runtime, reworking the threading subsystem, object model, and garbage collector.
Abstract: There is a growing interest in leveraging functional programming languages in real-time and embedded contexts. Functional languages are appealing as many are strictly typed, amenable to formal methods, have limited mutation, and have simple but powerful concurrency control mechanisms. Although there have been many recent proposals for specialized domain-specific languages for embedded and real-time systems, there has been relatively little progress on adapting more general purpose functional languages for programming embedded and real-time systems. In this paper, we present our current work on leveraging Standard ML (SML) in the embedded and real-time domains. Specifically, we detail our experiences in modifying MLton, a whole-program optimizing compiler for SML, for use in such contexts. We focus primarily on the language runtime, reworking the threading subsystem, object model, and garbage collector. We provide preliminary results over a radar-based aircraft collision detector ported to SML.

Journal ArticleDOI
TL;DR: In this article, a new proof is given that Brouwer's monotone bar theorem holds for any bar that can be realized by a functional of type (ℕ→ℕ)→ℕ in Gödel's System T. Effectful forcing is an elementary alternative to standard sheaf-theoretic forcing arguments.
Abstract: Extending Martín Escardó’s effectful forcing technique, we give a new proof of a well-known result: Brouwer’s monotone bar theorem holds for any bar that can be realized by a functional of type (ℕ→ℕ)→ℕ in Gödel’s System T. Effectful forcing is an elementary alternative to standard sheaf-theoretic forcing arguments, using ideas from programming languages, including computational effects, monads, the algebra interpretation of call-by-name λ-calculus, and logical relations. Our argument proceeds by interpreting System T programs as well-founded dialogue trees whose nodes branch on a query to an oracle of type ℕ→ℕ, lifted to higher type along a call-by-name translation. To connect this interpretation to the bar theorem, we then show that Brouwer’s famous “mental constructions” of barhood constitute an invariant form of these dialogue trees in which queries to the oracle are made maximally and in order.
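
To make the central notion concrete, here is a small Haskell rendering of dialogue trees and of running them against an oracle (Int stands in for ℕ; the names Eta and Beta follow the standard presentation, but the code is only an illustration):

```haskell
-- A dialogue tree computes an 'a' by querying an oracle of type Int -> Int:
-- an Eta node answers immediately; a Beta node queries the oracle at a
-- point and branches on the reply.
data Dialogue a
  = Eta a
  | Beta Int (Int -> Dialogue a)

-- Running a dialogue against a concrete oracle.
runDialogue :: (Int -> Int) -> Dialogue a -> a
runDialogue _ (Eta a)    = a
runDialogue f (Beta n k) = runDialogue f (k (f n))

-- The functional \f -> f 0 + f 1, represented as a dialogue tree.
example :: Dialogue Int
example = Beta 0 (\x -> Beta 1 (\y -> Eta (x + y)))
```

Well-foundedness of such trees is what connects System T functionals to bars: every path of queries eventually reaches an Eta answer.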

Journal ArticleDOI
TL;DR: From an educational-science perspective, this article analyses which forms of learning and pedagogical technologies can be found in the everyday work of Scrum users, such that one could speak of a learning organisation.
Abstract: From an educational-science perspective, this article analyses which forms of learning and pedagogical technologies can be found in the everyday work of Scrum users, such that one could speak of a learning organisation. A theoretical analysis of the Scrum Guide is followed by insights into eight expert interviews, which are examined by means of a reconstructive analysis for existing pedagogical approaches, technologies, and indicators of learning, before the hypothesis 'if an organisation works agilely, it also learns' is finally pursued.

Journal ArticleDOI
TL;DR: The calculus λAS, as mentioned in this paper, is a simply typed lambda calculus with algebraic simplification; it provides a foundation for studying the parallelisation of complex reductions by equational reasoning.
Abstract: Parallel reduction is a major component of parallel programming and widely used for summarisation and aggregation. It is not well understood, however, what sorts of non-trivial summarisations can be implemented as parallel reductions. This paper develops a calculus named λAS, a simply typed lambda calculus with algebraic simplification. This calculus provides a foundation for studying a parallelisation of complex reductions by equational reasoning. Its key feature is δ abstraction. A δ abstraction is observationally equivalent to the standard λ abstraction, but its body is simplified before the arrival of its arguments using algebraic properties such as associativity and commutativity. In addition, the type system of λAS guarantees that simplifications due to δ abstractions do not lead to serious overheads. The usefulness of λAS is demonstrated on examples of developing complex parallel reductions, including those containing more than one reduction operator, loops with conditional jumps, prefix sum patterns and even tree manipulations.
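
As a flavour of the kind of non-trivial summarisation at stake (our example, not one of the paper's): maximum prefix sum looks inherently sequential, but pairing it with the plain sum makes the combining step associative, so any balanced reduction tree, and hence a parallel reduction, computes it.

```haskell
data Summary = Summary
  { total     :: Int   -- sum of the whole block
  , maxPrefix :: Int   -- maximum over sums of prefixes (incl. the empty one)
  } deriving Show

-- Associative combine: a prefix of l ++ r is either a prefix of l, or all
-- of l followed by a prefix of r.
combine :: Summary -> Summary -> Summary
combine l r = Summary
  { total     = total l + total r
  , maxPrefix = max (maxPrefix l) (total l + maxPrefix r) }

leaf :: Int -> Summary
leaf x = Summary x (max 0 x)

summarise :: [Int] -> Summary
summarise = foldr1 combine . map leaf  -- any bracketing gives the same result
```

Discovering such associative forms by simplification under δ abstractions is exactly what λAS is designed to support.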

Journal ArticleDOI
TL;DR: In this article, the authors systematically derive an efficient regular expression (regex) matcher using a variety of program transformation techniques but very little specialized formal language and automata theory.
Abstract: We show how to systematically derive an efficient regular expression (regex) matcher using a variety of program transformation techniques, but very little specialized formal language and automata theory. Starting from the standard specification of the set-theoretic semantics of regular expressions, we proceed via a continuation-based backtracking matcher, to a classical, table-driven state machine. All steps of the development are supported by self-contained (and machine-verified) equational correctness proofs.
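
As a taste of the intermediate stage of the derivation, here is a minimal Haskell sketch of a continuation-based backtracking matcher (a standard construction in the spirit of the paper's development, not its verified code):

```haskell
data Regex
  = Fail            -- matches nothing
  | Eps             -- matches the empty string
  | Chr Char
  | Cat Regex Regex
  | Alt Regex Regex
  | Star Regex

-- 'match r s k' succeeds iff s = s1 ++ s2 for some s1 in the language
-- of r such that the continuation k accepts the remainder s2.
match :: Regex -> String -> (String -> Bool) -> Bool
match Fail        _ _ = False
match Eps         s k = k s
match (Chr c)     s k = case s of
  (c' : s') | c' == c -> k s'
  _                   -> False
match (Cat r1 r2) s k = match r1 s (\s' -> match r2 s' k)
match (Alt r1 r2) s k = match r1 s k || match r2 s k
match (Star r)    s k =
  k s || match r s (\s' -> s' /= s && match (Star r) s' k)
  -- the s' /= s check insists on progress, preventing loops on nullable r

matches :: Regex -> String -> Bool
matches r s = match r s null
```

From here the paper's development continues, by further transformation steps, to a classical table-driven state machine, with machine-checked equational proofs throughout.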

Journal ArticleDOI
TL;DR: In this article, the authors systematically develop four calculi for gradual typing and the relations between them, building on and strengthening previous work, and provide a coherent foundation for design, implementation, and optimization of gradual types.
Abstract: C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop four calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are as follows: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); λS, inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006); and λT, based on the threesome calculus of Siek and Wadler (2010). While λB and λC are little changed from previous work, λS and λT are new. Together, λB, λC, λS, and λT provide a coherent foundation for design, implementation, and optimization of gradual types. We define translations from λB to λC, from λC to λS, and from λS to λT. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for, that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to establish an equational theory of coercions; and second, using full abstraction from λB to λC to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.

Journal ArticleDOI
TL;DR: In this article, the task is to find, in linear time, the longest consecutive segment of a given string of parentheses that is balanced. The solution combines the usual approach for solving segment problems with a theorem for constructing the inverse of a function, through which the authors derive an instance of shift-reduce parsing.
Abstract: Given a string of parentheses, the task is to find the longest consecutive segment that is balanced, in linear time. We find this problem interesting because it involves a combination of techniques: the usual approach for solving segment problems and a theorem for constructing the inverse of a function—through which we derive an instance of shift-reduce parsing.
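
For reference, here is a rough Haskell sketch of a standard linear-time solution (ours, returning only the length; the paper instead calculates its algorithm, an instance of shift-reduce parsing, by program derivation):

```haskell
-- Longest balanced segment: keep indices of unmatched parentheses on a
-- stack, seeded with a base index of -1; whenever a ')' closes a '(',
-- the segment after the new stack top is balanced.
longestBalanced :: String -> Int
longestBalanced = go [-1] 0 . zip [0 ..]
  where
    go _     best []              = best
    go stack best ((i, c) : rest)
      | c == '('  = go (i : stack) best rest
      | otherwise = case stack of
          _ : tl@(t : _) -> go tl (max best (i - t)) rest -- ')' matched a '('
          _              -> go [i] best rest              -- unmatched ')': new base
```

For example, longestBalanced "()(())" is 6 and longestBalanced ")()" is 2. Every character pushes or pops at most once, giving the linear bound.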

Journal ArticleDOI
TL;DR: In this article, the authors present notions of refinement that preserve a concurrent value-dependent notion of noninterference that they have designed to support mixed-sensitivity concurrent programs, and demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed sensitivity concurrent programs that synchronise using mutex locks.
Abstract: Proving only over source code that programs do not leak sensitive data leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. Furthermore, software does not always have the luxury of limiting itself to single-threaded computation with resources statically dedicated to each user to ensure the confidentiality of their data. This results in mixed-sensitivity concurrent programs, which might reuse memory shared between their threads to hold data of different sensitivity levels at different times; for such programs, a compiler must preserve the value-dependent coordination of such mixed-sensitivity reuse despite the impact of concurrency. Here we demonstrate, using Isabelle/HOL, that it is feasible to verify that a compiler preserves noninterference, the strictest kind of confidentiality property, for mixed-sensitivity concurrent programs. First, we present notions of refinement that preserve a concurrent value-dependent notion of noninterference that we have designed to support such programs. As proving noninterference-preserving refinement can be considerably more complex than the standard refinements typically used to verify semantics-preserving compilation, our notions include a decomposition principle that separates the semantics preservation from security preservation concerns. Second, we demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed-sensitivity concurrent programs that synchronise using mutex locks, from a generic imperative language to a generic RISC-style assembly language. Finally, we execute our compiler on a non-trivial mixed-sensitivity concurrent program modelling a real-world use case, thus preserving its source-level noninterference properties down to an assembly-level model automatically. All results are formalised and proved in the Isabelle/HOL interactive proof assistant. Our work paves the way for more fully featured compilers to offer verified secure compilation support to developers of multithreaded software that must handle data of multiple sensitivity levels.

Journal ArticleDOI
TL;DR: In this paper, two charts serve as the running example: a bar chart that shows two different values for each bar, and two line charts that share the x-axis, with parts of the timeline highlighted using two different colors.
Abstract: Let’s say we want to create the two charts in Figure 1. The chart on the left is a bar chart that shows two different values for each bar. The chart on the right consists of two line charts that share the x axis with parts of the timeline highlighted using two different colors.