scispace - formally typeset
Author

Mahadevan Ganapathi

Other affiliations: Synopsys
Bio: Mahadevan Ganapathi is an academic researcher from Stanford University. The author has contributed to research in topics: Compiler & Code generation. The author has an h-index of 12 and has co-authored 21 publications receiving 538 citations. Previous affiliations of Mahadevan Ganapathi include Synopsys.

Papers
Journal ArticleDOI
TL;DR: A classification of automated retargetable code generation techniques and a survey of the work on these techniques is presented.
Abstract: A classification of automated retargetable code generation techniques and a survey of the work on these techniques is presented. Retargetable code generation research is classified into three categories: interpretive code generation, pattern-matched code generation, and table-driven code generation. Interpretive code generation approaches generate code for a virtual machine and then expand it into real target code. Pattern-matched code generation approaches separate the machine description from the code generation algorithm. Table-driven code generation approaches employ a formal machine description and use a code-generator generator to produce code generators automatically. An analysis of these techniques and a critique of automatic code generation algorithms are presented.
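The pattern-matched approach the survey classifies can be sketched in miniature: a machine description table maps expression-tree patterns to instruction templates, and a generic walker matches against it. The patterns, instruction syntax, and register naming below are invented for illustration, not taken from the paper.

```python
import itertools

# Machine description: tree pattern -> target instruction template.
# Swapping this table retargets the generator; the walker never changes.
PATTERNS = {
    ("add", "reg", "const"): "add {dst}, {a}, #{b}",
    ("add", "reg", "reg"):   "add {dst}, {a}, {b}",
    ("load", "const"):       "mov {dst}, #{a}",
}

def gen(tree, regs, code):
    """Emit instructions for an expression tree; return the result register."""
    if isinstance(tree, int):                       # constant leaf
        dst = f"r{next(regs)}"
        code.append(PATTERNS[("load", "const")].format(dst=dst, a=tree))
        return dst
    op, left, right = tree
    a = gen(left, regs, code)
    if isinstance(right, int):                      # reg + const pattern
        dst = f"r{next(regs)}"
        code.append(PATTERNS[(op, "reg", "const")].format(dst=dst, a=a, b=right))
        return dst
    b = gen(right, regs, code)
    dst = f"r{next(regs)}"
    code.append(PATTERNS[(op, "reg", "reg")].format(dst=dst, a=a, b=b))
    return dst

code = []
gen(("add", ("add", 1, 2), 3), itertools.count(), code)
print("\n".join(code))
```

A table-driven code-generator generator goes one step further: instead of interpreting the table at compile time, it compiles the table itself into a code generator.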

95 citations

Patent
22 Dec 1997
TL;DR: In this paper, the memory model includes a number of address bits corresponding to the address bits of the memory circuit, a number of data bits corresponding to the data bits of the memory circuit, and a memory type parameter corresponding to the type of the memory circuit.
Abstract: A computer system including a memory model of a memory circuit. The computer system comprises a processor coupled to receive and manipulate the memory model, and a memory including the memory model. The memory model includes: a number of address bits corresponding to a number of address bits of the memory circuit; a number of data bits corresponding to a number of data bits of the memory circuit; and a memory type parameter corresponding to a type of the memory circuit.
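The three parameters the claim enumerates can be captured in a small record type. The field names, memory-type strings, and the derived capacity property below are assumptions for illustration, not from the patent text.

```python
from dataclasses import dataclass

@dataclass
class MemoryModel:
    address_bits: int   # number of address bits of the modeled memory circuit
    data_bits: int      # number of data bits of the modeled memory circuit
    mem_type: str       # memory type parameter, e.g. "sram", "dram", "rom"

    @property
    def capacity_bits(self) -> int:
        # 2**address_bits addressable words, each data_bits wide
        return (1 << self.address_bits) * self.data_bits

m = MemoryModel(address_bits=10, data_bits=8, mem_type="sram")
print(m.capacity_bits)  # 1024 words x 8 bits = 8192
```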

76 citations

Journal ArticleDOI
TL;DR: Affix grammars are used to describe the instruction set of a target architecture for purposes of compiler code generation and a compiler built on this model can automatically perform most popular machine-dependent optimizations, including peephole optimizations.
Abstract: Affix grammars are used to describe the instruction set of a target architecture for purposes of compiler code generation. A code generator is obtained automatically for a compiler using attributed parsing techniques. A compiler built on this model can automatically perform most popular machine-dependent optimizations, including peephole optimizations. Code generators based on this model demonstrate retargetability for the VAX-11, iAPX-86, Z-8000, PDP-11, MC-68000, NS32032, FOM, and IBM-370 architectures.

53 citations

Journal ArticleDOI
TL;DR: An algorithm for interprocedural data flow analysis has been implemented that produces three flow-insensitive summary sets: MOD, USE and ALIASES. The utility of the resulting information was investigated using an optimizing Pascal compiler.
Abstract: The problem of tracking data flow across procedure boundaries has a long history of theoretical study by people who believed that such information would be useful for code optimization. Building upon previous work, an algorithm for interprocedural data flow analysis has been implemented. The algorithm produces three flow-insensitive summary sets: MOD, USE and ALIASES. The utility of the resulting information was investigated using an optimizing Pascal compiler. Over a sampling of 27 benchmarks, new optimizations performed as a result of interprocedural summary information contributed almost nothing to program execution speed. Finally, related optimization techniques of possibly greater potential are discussed.
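Flow-insensitive summaries of the MOD/USE kind can be computed by uniting each procedure's direct effects with those of its callees until a fixed point is reached. The three-procedure program and its effect sets below are invented; this is a sketch of the general idea, not the paper's algorithm (which also handles aliases and reference parameters).

```python
# Per-procedure direct effects and call-graph edges (invented example)
direct_mod = {"main": {"x"}, "f": {"y"}, "g": {"z"}}
direct_use = {"main": {"x"}, "f": {"x"}, "g": {"y"}}
calls = {"main": {"f"}, "f": {"g"}, "g": set()}

def summaries(direct, calls):
    """Propagate callee effects to callers until nothing changes."""
    summ = {p: set(s) for p, s in direct.items()}
    changed = True
    while changed:
        changed = False
        for p, callees in calls.items():
            for q in callees:
                before = len(summ[p])
                summ[p] |= summ[q]        # caller inherits callee's effects
                changed |= len(summ[p]) != before
    return summ

MOD = summaries(direct_mod, calls)
USE = summaries(direct_use, calls)
print(sorted(MOD["main"]), sorted(USE["main"]))
```

Because the summaries ignore control flow inside each procedure, they are cheap to compute but conservative, which is consistent with the paper's finding that the resulting optimizations bought little speedup in practice.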

37 citations


Cited by
01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science" and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

01 Jan 2005
TL;DR: This thesis presents an automatic partial evaluator for the ANSI C programming language, proves that partial evaluation can accomplish at most linear speedup, and develops an automatic speedup analysis.
Abstract: Software engineers are faced with a dilemma. They want to write general and well-structured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. However, the development of specialized software is time-consuming, and is likely to exceed the capacity of today's programmers. New techniques are required to solve this so-called software crisis. Partial evaluation is a program specialization technique that reconciles the benefits of generality with efficiency. This thesis presents an automatic partial evaluator for the ANSI C programming language. The content of this thesis is analysis and transformation of C programs. We develop several analyses that support the transformation of a program into its generating extension. A generating extension is a program that produces specialized programs when executed on parts of the input. The thesis contains the following main results.
• We develop a generating-extension transformation, and describe specialization of the various parts of C, including pointers and structures.
• We develop constraint-based inter-procedural pointer and binding-time analyses. Both analyses are specified via non-standard type inference systems, and implemented by constraint solving.
• We develop a side-effect and an in-use analysis. These analyses are developed in the classical monotone data-flow analysis framework. Some intriguing similarities with constraint-based analysis are observed.
• We investigate separate and incremental program analysis and transformation. Realistic programs are structured into modules, which break down inter-procedural analyses that need global information about functions.
• We prove that partial evaluation can accomplish at most linear speedup, and develop an automatic speedup analysis.
• We study the stronger transformation technique driving, and initiate the development of generating super-extensions.
The developments in this thesis are supported by an implementation. Throughout the chapters we present empirical results.
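The classic toy example of partial evaluation is specializing a power function on a statically known exponent: the residual program has the loop fully unrolled, which is exactly the kind of linear (but no better than linear) speedup the thesis proves. The thesis targets C via generating extensions; the sketch below instead generates Python source, purely for illustration.

```python
def specialize_pow(n: int) -> str:
    """Residual program: a function computing x**n with the loop unrolled."""
    body = "1"
    for _ in range(n):
        body = "x" if body == "1" else f"x * ({body})"
    return f"def pow_{n}(x):\n    return {body}\n"

src = specialize_pow(3)
print(src)              # def pow_3(x): return x * (x * (x))
ns = {}
exec(src, ns)           # compile the residual program
print(ns["pow_3"](2))   # 8
```

A generating extension, in this vocabulary, is `specialize_pow` itself: a program that, given the static part of the input (`n`), emits the specialized program.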

1,009 citations

Journal ArticleDOI
Mark N. Wegman, F. Kenneth Zadeck
TL;DR: Four algorithms are presented, all conservative in the sense that not all constants may be found, but each constant found is constant over all possible executions of the program.
Abstract: Constant propagation is a well-known global flow analysis problem. The goal of constant propagation is to discover values that are constant on all possible executions of a program and to propagate these constant values as far forward through the program as possible. Expressions whose operands are all constants can be evaluated at compile time and the results propagated further. Using the algorithms presented in this paper can produce smaller and faster compiled programs. The same algorithms can be used for other kinds of analyses (e.g., type determination). We present four algorithms in this paper, all conservative in the sense that not all constants may be found, but each constant found is constant over all possible executions of the program. These algorithms are among the simplest, fastest, and most powerful global constant propagation algorithms known. We also present a new algorithm that performs a form of interprocedural data flow analysis in which aliasing information is gathered in conjunction with constant propagation. Several variants of this algorithm are considered.
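The core idea can be sketched on straight-line three-address code: track each variable in a lattice of constant-or-bottom, and fold any expression whose operands are all constant. This is a minimal, invented sketch of constant folding with propagation, not the paper's SSA-based worklist algorithms, which also handle branches and interprocedural aliasing.

```python
BOTTOM = object()   # lattice bottom: definitely not a compile-time constant

def propagate(instrs):
    """instrs: list of (dst, op, a, b); operands are ints or variable names."""
    env = {}                                 # var -> constant value or BOTTOM
    def val(x):
        return x if isinstance(x, int) else env.get(x, BOTTOM)
    for dst, op, a, b in instrs:
        va, vb = val(a), val(b)
        if isinstance(va, int) and isinstance(vb, int):
            env[dst] = va + vb if op == "+" else va * vb   # fold at compile time
        else:
            env[dst] = BOTTOM                # unknown operand poisons the result
    return env

code = [("a", "+", 2, 3),      # a = 2 + 3  -> constant 5
        ("b", "*", "a", 4),    # b = a * 4  -> constant 20, propagated from a
        ("c", "+", "b", "x")]  # x is an unknown input -> c is not constant
env = propagate(code)
print(env["a"], env["b"], env["c"] is BOTTOM)  # 5 20 True
```

The result is conservative in exactly the paper's sense: any variable the analysis reports as constant really is constant on every execution, but constants hidden behind unknown inputs are missed.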

542 citations

Proceedings ArticleDOI
Kurt Keutzer
01 Oct 1987
TL;DR: A solution to the problem of technology binding in terms of matching patterns, describing technology specific cells and optimizations, against a technology independent circuit represented as a directed acyclic graph is offered in DAGON.
Abstract: Technology binding is the process of mapping a technology independent description of a circuit into a particular technology. This paper outlines a formalism of this problem and offers a solution to the problem in terms of matching patterns, describing technology specific cells and optimizations, against a technology independent circuit represented as a directed acyclic graph. This solution is implemented in DAGON. DAGON rests on a firm algorithmic foundation, and is able to guarantee locally optimal matches against a set of over three thousand patterns. DAGON is an integral part of a synthesis system that has been found to provide industrial quality solutions to real circuit design problems.
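The covering step DAGON performs can be illustrated on a tree-shaped circuit: each library cell is a pattern over the gates, and dynamic programming picks the minimum-cost cover of every subtree. The three-cell library, its area costs, and the NAND/INV decomposition below are invented for illustration.

```python
INV, NAND, AND2 = 1, 2, 2   # invented cell area costs

def cover(node):
    """Minimum-area cover of the subtree rooted at node (dynamic programming)."""
    if isinstance(node, str):                 # primary input: no cell needed
        return 0
    if node[0] == "inv":
        best = INV + cover(node[1])           # plain inverter cell
        child = node[1]
        if not isinstance(child, str) and child[0] == "nand":
            # fused AND cell matches the two-gate pattern INV(NAND(a, b))
            best = min(best, AND2 + cover(child[1]) + cover(child[2]))
        return best
    return NAND + cover(node[1]) + cover(node[2])   # 2-input NAND cell

circuit = ("inv", ("nand", "x", "y"))   # AND(x, y) in NAND/INV form
print(cover(circuit))  # fused AND cell (2) beats INV + NAND (1 + 2 = 3)
```

On a tree, this bottom-up dynamic programming is locally optimal; DAGON's contribution includes making the same machinery practical on DAGs and against a library of thousands of patterns.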

507 citations

Journal ArticleDOI
Susan L. Graham
TL;DR: A tree-manipulation language called twig has been developed to help construct efficient code generators; twig combines a fast top-down tree-pattern matching algorithm with dynamic programming.
Abstract: Compiler-component generators, such as lexical analyzer generators and parser generators, have long been used to facilitate the construction of compilers. A tree-manipulation language called twig has been developed to help construct efficient code generators. Twig transforms a tree-translation scheme into a code generator that combines a fast top-down tree-pattern matching algorithm with dynamic programming. Twig has been used to specify and construct code generators for several experimental compilers targeted for different machines.
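What the matching-plus-dynamic-programming combination buys can be seen when several patterns cover the same subtree at different costs: each subtree keeps only its cheapest derivation. The two instruction alternatives, their costs, and the assembly syntax below are invented; register allocation is elided (every value lives in a placeholder register `r`).

```python
def select(tree):
    """Return (cost, instrs) for the cheapest cover of an expression tree."""
    if isinstance(tree, int):
        return 1, [f"mov r, #{tree}"]                 # load-immediate, cost 1
    op, l, r = tree
    cl, il = select(l)
    cr, ir = select(r)
    best = cl + cr + 1, il + ir + ["add r, r, r"]     # generic reg-reg add
    if isinstance(r, int):
        # add-immediate pattern folds a constant operand into one instruction
        cand = cl + 1, il + [f"add r, r, #{r}"]
        if cand[0] < best[0]:
            best = cand
    return best

cost, instrs = select(("add", ("add", 5, 1), 2))
print(cost)  # add-immediate wins at both levels: 1 (mov) + 1 + 1 = 3
```

A twig specification expresses the same thing declaratively, as rewrite rules with cost annotations, and compiles them into the matcher rather than hand-coding the alternatives as above.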

338 citations