Author

Cliff Click

Bio: Cliff Click is an academic researcher from Rice University. The author has contributed to research on the topics of program optimization and partial redundancy elimination. The author has an h-index of 2 and has co-authored 2 publications receiving 189 citations.

Papers
Journal ArticleDOI
TL;DR: This article presents a framework for combining constant propagation, value numbering, and unreachable-code elimination, and shows how to combine two such frameworks and how to reason about the properties of the resulting framework.
Abstract: Modern optimizing compilers use several passes over a program's intermediate representation to generate good code. Many of these optimizations exhibit a phase-ordering problem. Getting the best code may require iterating optimizations until a fixed point is reached. Combining these phases can lead to the discovery of more facts about the program, exposing more opportunities for optimization. This article presents a framework for describing optimizations. It shows how to combine two such frameworks and how to reason about the properties of the resulting framework. The structure of the framework provides insight into when a combination yields better results. To make the ideas more concrete, this article presents a framework for combining constant propagation, value numbering, and unreachable-code elimination. It is an open question as to what other frameworks can be combined in this way.
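As a rough illustration of the phase-ordering problem such a combined framework removes, consider the sketch below. The code and names are invented for illustration and are not taken from the article: neither constant propagation nor value numbering alone can simplify the loop, but an optimistic, combined analysis can.

```java
// Hypothetical illustration (not from the article) of the mutual dependence
// between analyses that a combined, optimistic framework can resolve.
class CombinedPhasesExample {
    static int f(int n) {
        int i = 0, j = 0;
        while (i < n) {
            if (i != j) {
                // Constant propagation alone cannot kill this branch: i and j
                // are not constants.  Value numbering alone cannot prove i == j,
                // because this (actually dead) assignment breaks the congruence.
                j = j + 2;
            }
            i = i + 1;
            j = j + 1;
        }
        // Analyzed together, the combined framework optimistically assumes the
        // branch is unreachable, then finds i and j congruent, which confirms
        // the assumption: the test folds to false, the branch is removed, and
        // j can be replaced by i.
        return j;
    }
}
```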

173 citations

Journal ArticleDOI
TL;DR: This article proposes a framework that identifies three sins of reasoning that lead to unsound claims and two sins of exposition that lead to poorly described claims and evaluations and provides practitioners with a principled way of critiquing the integrity of their own work and the work of others.
Abstract: An unsound claim can misdirect a field, encouraging the pursuit of unworthy ideas and the abandonment of promising ideas. An inadequate description of a claim can make it difficult to reason about the claim, for example, to determine whether the claim is sound. Many practitioners will acknowledge the threat of unsound claims or inadequate descriptions of claims to their field. We believe that this situation is exacerbated, and even encouraged, by the lack of a systematic approach to exploring, exposing, and addressing the source of unsound claims and poor exposition. This article proposes a framework that identifies three sins of reasoning that lead to unsound claims and two sins of exposition that lead to poorly described claims and evaluations. Sins of exposition obfuscate the objective of determining whether or not a claim is sound, while sins of reasoning lead directly to unsound claims. Our framework provides practitioners with a principled way of critiquing the integrity of their own work and the work of others. We hope that this will help individuals conduct better science and encourage a cultural shift in our research community to identify and promulgate sound claims.

28 citations


Cited by
23 Apr 2001
TL;DR: The Java HotSpot™ Server Compiler achieves improved asymptotic performance through a combination of object-oriented and classical-compiler optimizations.

Abstract: The Java HotSpot™ Server Compiler achieves improved asymptotic performance through a combination of object-oriented and classical-compiler optimizations. Aggressive inlining using class-hierarchy analysis reduces function call overhead and provides opportunities for many compiler optimizations.
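The following source-level sketch illustrates the kind of class-hierarchy-analysis (CHA) inlining the abstract describes. The classes are invented for this illustration, and the transformation is described only in the comments.

```java
// Sketch of CHA-based devirtualization and inlining (invented example).
abstract class Shape {
    abstract double area();
}

final class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class ChaDemo {
    static double totalArea(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) {
            // If class-hierarchy analysis shows Circle is the only loaded subclass
            // of Shape, the virtual call below can be treated as a direct call and
            // Circle.area() inlined, exposing the loop body to classical compiler
            // optimizations.  Loading another Shape subclass later invalidates the
            // assumption and forces deoptimization/recompilation.
            sum += s.area();
        }
        return sum;
    }
}
```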

300 citations

Proceedings ArticleDOI
01 May 1996
TL;DR: This work targets general-purpose, imperative programming languages, initially C, and strives for both fast dynamic compilation and high-quality dynamically-compiled code.

Abstract: Dynamic compilation enables optimization based on the values of invariant data computed at run-time. Using the values of these run-time constants, a dynamic compiler can eliminate their memory loads, perform constant propagation and folding, remove branches they determine, and fully unroll loops they bound. However, the performance benefits of the more efficient, dynamically-compiled code are offset by the run-time cost of the dynamic compile. Our approach to dynamic compilation strives for both fast dynamic compilation and high-quality dynamically-compiled code: the programmer annotates regions of the programs that should be compiled dynamically; a static, optimizing compiler automatically produces pre-optimized machine-code templates, using a pair of dataflow analyses that identify which variables will be constant at run-time; and a simple, dynamic compiler copies the templates, patching in the computed values of the run-time constants, to produce optimized, executable code. Our work targets general-purpose, imperative programming languages, initially C. Initial experiments applying dynamic compilation to C programs have produced speedups ranging from 1.2 to 1.8.
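The system described targets annotated C; the Java-flavored sketch below (with invented names) only illustrates the kind of run-time-constant specialization the abstract describes: once a value becomes known and invariant at run time, its loads can be folded, the loop it bounds fully unrolled, and the resulting arithmetic simplified.

```java
// Illustration of run-time-constant specialization (invented example; the
// paper's actual system works on annotated C, not Java).
class RuntimeConstantSketch {
    // General code: 'mask' becomes known at run time and is then invariant.
    static int applyGeneric(int[] pixels, int[] mask, int i) {
        int acc = 0;
        for (int k = 0; k < mask.length; k++) {  // loop bound is a run-time constant
            acc += pixels[i + k] * mask[k];      // mask[k] loads are run-time constants
        }
        return acc;
    }

    // Roughly what a dynamic compiler can produce once mask = {1, -2, 1} is known:
    // the mask loads are folded, the loop is fully unrolled, and the
    // multiplications by +/-1 simplify away.
    static int applySpecialized(int[] pixels, int i) {
        return pixels[i] - 2 * pixels[i + 1] + pixels[i + 2];
    }
}
```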

203 citations

Proceedings ArticleDOI
27 Feb 2021
TL;DR: MLIR as discussed by the authors is an approach to building reusable and extensible compiler infrastructure that facilitates the design and implementation of code generators, translators and optimizers at different levels of abstraction and across application domains, hardware targets and execution environments.
Abstract: This work presents MLIR, a novel approach to building reusable and extensible compiler infrastructure. MLIR addresses software fragmentation, improves compilation for heterogeneous hardware, significantly reduces the cost of building domain-specific compilers, and helps connect existing compilers together. MLIR facilitates the design and implementation of code generators, translators and optimizers at different levels of abstraction and across application domains, hardware targets and execution environments. The contributions of this work include (1) a discussion of MLIR as a research artifact, built for extension and evolution, identifying the challenges and opportunities posed by this novel design in its semantics, optimization specification, system, and engineering; and (2) an evaluation of MLIR as a generalized infrastructure that reduces the cost of building compilers, describing diverse use cases to show research and educational opportunities for future programming languages, compilers, execution environments, and computer architecture. The paper also presents the rationale for MLIR, its original design principles, structures and semantics.

162 citations

Proceedings ArticleDOI
Cliff Click
01 Jun 1995
TL;DR: This paper argues that optimizing compilers should treat the machine-independent optimizations (e.g., conditional constant propagation, global value numbering) and code motion issues separately, which allows stronger optimizations using simpler algorithms.
Abstract: We believe that optimizing compilers should treat the machine-independent optimizations (e.g., conditional constant propagation, global value numbering) and code motion issues separately. Removing the code motion requirements from the machine-independent optimization allows stronger optimizations using simpler algorithms. Preserving a legal schedule is one of the prime sources of complexity in algorithms like PRE [18, 13] or global congruence finding [2, 20].
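A small, hypothetical source-level example (not drawn from the paper itself) of why deferring code motion simplifies the machine-independent optimization:

```java
// Invented example: congruent expressions on two paths of a branch.
class GvnGcmSketch {
    static int pick(int a, int b, boolean p) {
        int t;
        if (p) {
            t = a * b + 1;   // the same expression appears on both paths
        } else {
            t = a * b + 1;
        }
        // A schedule-free global value numbering can mark the two computations
        // congruent without deciding where the merged expression must live;
        // global code motion afterwards places the single a * b + 1 before the
        // branch, constrained only by data dependences.
        return t;
    }
}
```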

158 citations

Proceedings ArticleDOI
01 Oct 1996
TL;DR: The Vortex compiler infrastructure is developed, a language-independent optimizing compiler for object-oriented languages, with front-ends for Cecil, C++, Java, and Modula-3, and the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four languages are reported.
Abstract: Previously, techniques such as class hierarchy analysis and profile-guided receiver class prediction have been demonstrated to greatly improve the performance of applications written in pure object-oriented languages, but the degree to which these results are transferable to applications written in hybrid languages has been unclear. In part to answer this question, we have developed the Vortex compiler infrastructure, a language-independent optimizing compiler for object-oriented languages, with front-ends for Cecil, C++, Java, and Modula-3. In this paper, we describe the Vortex compiler's intermediate language, internal structure, and optimization suite, and then we report the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four languages. We characterize the benchmark programs in terms of a collection of static and dynamic metrics, intended to quantify aspects of the "object-orientedness" of a program.
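As an illustrative sketch of profile-guided receiver class prediction, one of the optimizations evaluated in this line of work, consider the rewrite below. The names are invented, and the transformation is shown as a source-to-source rewrite rather than in Vortex's intermediate language.

```java
// Invented example of profile-guided receiver class prediction.
interface Node {
    int eval();
}

final class Leaf implements Node {
    final int value;
    Leaf(int value) { this.value = value; }
    public int eval() { return value; }
}

class PredictionSketch {
    static int evalAll(Node[] nodes) {
        int sum = 0;
        for (Node n : nodes) {
            // Profiles say the receiver is almost always a Leaf, so the compiler
            // guards an inlined copy of Leaf.eval() with a cheap class test and
            // falls back to normal dynamic dispatch otherwise.
            if (n instanceof Leaf) {
                sum += ((Leaf) n).value;  // inlined fast path
            } else {
                sum += n.eval();          // slow path: dynamic dispatch
            }
        }
        return sum;
    }
}
```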

154 citations