Author

Jean-Yves Ratajszczak

Bio: Jean-Yves Ratajszczak is an academic researcher from École Normale Supérieure. The author has contributed to research in topics: Compiler & Computer program. The author has an h-index of 1 and has co-authored 1 publication, which has received 36 citations.

Papers
Journal ArticleDOI
TL;DR: This paper describes a neural compiler that takes a PASCAL program as input and produces a neural network computing what the program specifies; the compiler first generates an intermediate code called cellular code.

41 citations
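The cellular-code machinery is beyond this summary, but the core idea admits a minimal sketch (not the paper's actual pipeline; all names below are hypothetical): walk a program's expression tree and emit one linear "neuron" per operation, so that evaluating the resulting graph computes exactly what the source program specifies.

```python
# Minimal sketch (not the paper's cellular-code pipeline): compile a tiny
# expression tree into a graph of "neurons" whose weighted sums compute the
# program's result. All names here are hypothetical.

def compile_expr(expr, graph):
    """Recursively emit one unit per operation; returns the unit's id."""
    if isinstance(expr, (int, float)):           # constant: a bias-only unit
        uid = len(graph)
        graph[uid] = ("const", expr, [])
        return uid
    op, left, right = expr                       # ("add" | "sub", lhs, rhs)
    l, r = compile_expr(left, graph), compile_expr(right, graph)
    uid = len(graph)
    sign = 1.0 if op == "add" else -1.0
    graph[uid] = ("linear", 0.0, [(l, 1.0), (r, sign)])  # weighted sum unit
    return uid

def run(graph, out):
    kind, bias, inputs = graph[out]
    if kind == "const":
        return bias
    return bias + sum(w * run(graph, src) for src, w in inputs)

g = {}
root = compile_expr(("add", 2, ("sub", 7, 3)), g)   # the program 2 + (7 - 3)
print(run(g, root))                                  # -> 6.0
```

The paper's compiler covers full PASCAL (loops, procedures) by lowering through cellular code first; this sketch only handles constant arithmetic.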


Cited by
Journal ArticleDOI
TL;DR: An artificial developmental system that is a computationally efficient technique for the automatic generation of complex artificial neural networks (ANNs) and some simulation results showing that the same problem cannot be solved if the mechanism for automatic definition of subnetworks is suppressed.
Abstract: This article illustrates an artificial developmental system that is a computationally efficient technique for the automatic generation of complex artificial neural networks (ANNs). The artificial developmental system can develop a graph grammar into a modular ANN made of a combination of simpler subnetworks. A genetic algorithm is used to evolve coded grammars that generate ANNs for controlling six-legged robot locomotion. A mechanism for the automatic definition of neural subnetworks is incorporated. Using this mechanism, the genetic algorithm can automatically decompose a problem into subproblems, generate a subANN for solving the subproblem, and instantiate copies of this subANN to build a higher-level ANN that solves the problem. We report some simulation results showing that the same problem cannot be solved if the mechanism for automatic definition of subnetworks is suppressed. We support our argument with pictures that describe the steps of development, how ANN structures are evolved, and how the AN...

294 citations
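To make the developmental mechanism concrete, here is a minimal sketch (an assumed simplification, not the article's graph grammar): a genome is a list of division rules, development rewrites one cell at a time, and a reusable sub-genome plays the role of an automatically defined subnetwork, instantiated twice just as a subANN copy would be.

```python
# Minimal sketch (assumed simplification, not the article's grammar): a
# genome is a list of division rules; development grows a network graph by
# rewriting one cell at a time, and a nested sub-genome acts as a "subANN".

def divide_seq(net, cell):
    """Sequential division: the new cell inherits the old cell's outputs."""
    new = max(net) + 1
    net[new] = net[cell]          # new cell takes over the out-edges
    net[cell] = [new]             # old cell now feeds only the new cell
    return new

def develop(genome, net, cell):
    for step in genome:
        if step == "S":
            cell = divide_seq(net, cell)
        elif isinstance(step, list):      # reusable sub-genome ("subANN")
            cell = develop(step, net, cell)
    return cell

leg_module = ["S", "S"]                   # one sub-genome, instantiated twice
net = {0: []}                             # development starts from one cell
develop([leg_module, leg_module], net, 0)
print(net)   # {0: [1], 1: [2], 2: [3], 3: [4], 4: []} -- a grown chain
```

The point of the reuse is the same as in the article: a subproblem is solved once, as a sub-genome, and copied wherever the higher-level network needs it.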

Proceedings ArticleDOI
TL;DR: NATURALIZE is a framework that learns the style of a codebase and suggests revisions to improve stylistic consistency; it can even transfer knowledge about coding conventions across projects.
Abstract: Every programmer has a characteristic style, ranging from preferences about identifier naming to preferences about object relationships and design patterns. Coding conventions define a consistent syntactic style, fostering readability and hence maintainability. When collaborating, programmers strive to obey a project's coding conventions. However, one third of reviews of changes contain feedback about coding conventions, indicating that programmers do not always follow them and that project members care deeply about adherence. Unfortunately, programmers are often unaware of coding conventions because inferring them requires a global view, one that aggregates the many local decisions programmers make and identifies emergent consensus on style. We present NATURALIZE, a framework that learns the style of a codebase, and suggests revisions to improve stylistic consistency. NATURALIZE builds on recent work in applying statistical natural language processing to source code. We apply NATURALIZE to suggest natural identifier names and formatting conventions. We present four tools focused on ensuring natural code during development and release management, including code review. NATURALIZE achieves 94% accuracy in its top suggestions for identifier names and can even transfer knowledge about conventions across projects, leveraging a corpus of 10,968 open source projects. We used NATURALIZE to generate 18 patches for 5 open source projects: 14 were accepted.

240 citations
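The statistical core can be illustrated with a toy bigram language model (an assumed simplification; NATURALIZE's actual models and four tools are richer): learn token statistics from the codebase, then rank candidate identifier names by how natural they make the surrounding code.

```python
# Toy sketch of the core idea (a bigram model over code tokens, not
# NATURALIZE's actual implementation): the codebase's dominant naming
# convention scores highest under a language model trained on that codebase.

from collections import Counter
from math import log

def train_bigrams(token_lists):
    bigrams, unigrams = Counter(), Counter()
    for toks in token_lists:
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return bigrams, unigrams

def score(toks, bigrams, unigrams):
    """Log-probability of a token sequence under add-one-smoothed bigrams."""
    v = len(unigrams)
    return sum(log((bigrams[(a, b)] + 1) / (unigrams[a] + v))
               for a, b in zip(toks, toks[1:]))

corpus = [["for", "i", "in", "range", "(", "n", ")"],
          ["for", "i", "in", "range", "(", "count", ")"]]
bi, uni = train_bigrams(corpus)

context = ["for", "{id}", "in", "range", "(", "n", ")"]
candidates = ["i", "loopVar", "x"]
best = max(candidates,
           key=lambda c: score([c if t == "{id}" else t for t in context],
                               bi, uni))
print(best)   # -> "i": the convention the codebase already uses wins
```

Suggesting a rename is then just proposing the candidate that most improves the model's score, which is what makes the approach transferable: train on a different corpus and you inherit its conventions.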

Proceedings Article
06 Aug 2017
TL;DR: In this article, an end-to-end differentiable interpreter for the programming language Forth is presented, which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data.
Abstract: Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. In this paper, we consider the case of prior procedural knowledge for neural networks, such as knowing how a program should traverse a sequence, but not what local actions should be performed at each step. To this end, we present an end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. When connected to outputs of an LSTM and trained jointly, our interpreter achieves state-of-the-art accuracy for end-to-end reasoning about quantities expressed in natural language stories.

70 citations
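The "slot" idea can be sketched outside Forth as well. Below is a hypothetical PyTorch toy (not the paper's differentiable Forth machine): the unknown word in a program sketch becomes a trainable soft mixture over candidate operations, fit purely from input-output examples by gradient descent.

```python
# Toy sketch of a differentiable program slot (assumed setup, not the
# paper's Forth interpreter): softmax weights over candidate ops are the
# only trainable parameters, learned from input-output pairs alone.

import torch

ops = [lambda a, b: a + b,             # candidate behaviours for the slot
       lambda a, b: a - b,
       lambda a, b: torch.maximum(a, b)]
logits = torch.zeros(len(ops), requires_grad=True)

def sketch(a, b):
    """Fixed program structure with one trainable slot inside."""
    w = torch.softmax(logits, dim=0)
    return sum(wi * op(a, b) for wi, op in zip(w, ops))

# The training data implicitly says the slot should behave like max.
a = torch.tensor([1.0, 5.0, 2.0])
b = torch.tensor([4.0, 3.0, 2.0])
y = torch.tensor([4.0, 5.0, 2.0])

opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = ((sketch(a, b) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(torch.softmax(logits, dim=0))    # weight concentrates on the max op
```

Because the whole sketch is differentiable, it can sit inside a larger computation graph, which is what lets the paper train it jointly with an LSTM.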

Posted Content
TL;DR: An exhaustive evaluation on the task of checking equivalence over a highly diverse class of symbolic algebraic and boolean expressions shows that the proposed neural equivalence networks significantly outperform existing architectures.
Abstract: Combining abstract, symbolic reasoning with continuous neural reasoning is a grand challenge of representation learning. As a step in this direction, we propose a new architecture, called neural equivalence networks, for the problem of learning continuous semantic representations of algebraic and logical expressions. These networks are trained to represent semantic equivalence, even of expressions that are syntactically very different. The challenge is that semantic representations must be computed in a syntax-directed manner, because semantics is compositional, but at the same time, small changes in syntax can lead to very large changes in semantics, which can be difficult for continuous neural architectures. We perform an exhaustive evaluation on the task of checking equivalence on a highly diverse class of symbolic algebraic and boolean expression types, showing that our model significantly outperforms existing architectures.

69 citations
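A minimal sketch of the syntax-directed part (a toy with random, untrained weights, not the paper's trained networks): each operator owns its own composition network, and an expression's vector is built bottom-up from its subtrees. Training is what would push semantically equivalent expressions toward the same representation; the sketch only shows the compositional structure.

```python
# Toy syntax-directed encoder (assumed simplification of neural equivalence
# networks, with untrained random weights): one small network per operator,
# representations composed recursively over the expression tree.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8
leaf = {s: rng.normal(size=DIM) for s in ["a", "b", "true", "false"]}
W = {op: rng.normal(size=(DIM, 2 * DIM)) for op in ["and", "or"]}

def encode(expr):
    """Compose child embeddings with the operator's own weight matrix."""
    if isinstance(expr, str):
        return leaf[expr]
    op, lhs, rhs = expr
    h = np.tanh(W[op] @ np.concatenate([encode(lhs), encode(rhs)]))
    return h / np.linalg.norm(h)       # keep representations on a fixed scale

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

e1 = encode(("and", "a", "b"))
e2 = encode(("and", "b", "a"))   # semantically equal, syntactically different
print(cosine(e1, e2))            # training would push this similarity to 1.0
```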

Posted Content
TL;DR: An end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data, and shows empirically that this interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition.
Abstract: Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. In this paper, we consider the case of prior procedural knowledge for neural networks, such as knowing how a program should traverse a sequence, but not what local actions should be performed at each step. To this end, we present an end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. When connected to outputs of an LSTM and trained jointly, our interpreter achieves state-of-the-art accuracy for end-to-end reasoning about quantities expressed in natural language stories.

50 citations