Topic

Constant (computer programming)

About: Constant (computer programming) is a research topic. Over its lifetime, 223 publications have been published on this topic, receiving 4,360 citations.


Papers
Book Chapter (DOI)
08 Apr 2002
TL;DR: An algorithm is presented that takes a past-time LTL formula and generates an efficient dynamic programming algorithm for testing the formula on a finite execution trace; the work is part of the PathExplorer project, whose objective is to construct a flexible framework for monitoring and analyzing program executions.
Abstract: The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. An algorithm is presented which takes a past-time LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the formula is satisfied by a finite trace of events given as input and runs in linear time, with a constant factor that depends on the size of the LTL formula. The memory needed is also constant, depending only on the size of the formula. Further optimizations of the algorithm are suggested. Past-time operators suitable for writing succinct specifications are introduced and shown definitionally equivalent to the standard operators. This work is part of the PathExplorer project, whose objective is to construct a flexible framework for monitoring and analyzing program executions.
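
A minimal sketch of the dynamic programming idea (illustrative Python, not the paper's generated code): monitoring the past-time formula p S q ("p since q") over a finite trace. Only the previous event's subformula values are kept, so memory stays constant in the trace length.

def monitor_since(trace):
    pre = [False, False, False]   # values of p, q, (p S q) at the previous event
    for event in trace:
        now = [False, False, False]
        now[0] = event.get("p", False)   # atomic proposition p
        now[1] = event.get("q", False)   # atomic proposition q
        # Recurrence: (p S q) holds now iff q holds now, or p holds
        # now and (p S q) held at the previous event.
        now[2] = now[1] or (now[0] and pre[2])
        pre = now   # constant memory: only the previous step survives
    return pre[2]

trace = [{"q": True}, {"p": True}, {"p": True}]
print(monitor_since(trace))   # True: p has held ever since q held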

381 citations

Book
01 Jan 1993
TL;DR: This book provides the general mathematical background needed for loop transformations (including the basic mathematical algorithms involved), discusses data dependence, and introduces the major transformations.
Abstract: Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises great practical rewards. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), but the demand for increasing speed is constant. The job of a restructuring compiler is to discover the dependence structure of a given program and transform the program in a way that is consistent with both that dependence structure and the characteristics of the given machine. Much attention in this field of research has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. Loop Transformations for Restructuring Compilers: The Foundations provides a rigorous theory of loop transformations. The transformations are developed in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations. The algorithms that implement the transformations can then be precisely described in terms of certain abstract mathematical algorithms. The book provides the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discusses data dependence, and introduces the major transformations. The next volume will build a detailed theory of loop transformations based on the material developed here. Loop Transformations for Restructuring Compilers: The Foundations presents a theory of loop transformations that is rigorous and yet reader-friendly.
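
As an illustration of the kind of basic mathematical algorithm data-dependence analysis rests on, here is a hedged Python sketch of the classical GCD dependence test (a generic formulation, not taken from the book). The test is conservative: it may report a possible dependence that the actual loop bounds rule out.

from math import gcd

def gcd_test(a, b, c, d):
    # A dependence between a write to A[a*i + b] and a read of
    # A[c*j + d] requires an integer solution of a*i - c*j = d - b,
    # which exists only if gcd(a, c) divides d - b.
    return (d - b) % gcd(a, c) == 0

print(gcd_test(2, 0, 2, 1))   # False: even vs. odd indices never meet,
                              # so the loop can be parallelized
print(gcd_test(2, 0, 4, 6))   # True: a dependence is possible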

253 citations

Journal Article (DOI)
TL;DR: The goal is to show that it is possible to identify the author of a program by examining programming-style characteristics, and to find a set of characteristics that remain constant across a significant portion of the programs that a given programmer might produce.
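
A minimal sketch of the general approach, with hypothetical metrics chosen purely for illustration (the paper's actual feature set is not reproduced here): extract style characteristics from source code, then compare metric vectors across programs.

def style_metrics(source):
    lines = source.splitlines()
    nonblank = [l for l in lines if l.strip()]
    n = max(len(nonblank), 1)
    return {
        "mean_line_length": sum(len(l) for l in nonblank) / n,
        "blank_line_ratio": 1 - len(nonblank) / max(len(lines), 1),
        "comment_ratio": sum(l.lstrip().startswith("#") for l in nonblank) / n,
        "tabs_for_indent": any(l.startswith("\t") for l in lines),
    }

# Two programs by the same author should yield similar vectors; a
# classifier (e.g. nearest neighbour) could then attribute an
# unknown program to the closest known author.
print(style_metrics("# demo\nx = 1\n\ny = x + 2\n"))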

185 citations

Journal Article (DOI)
TL;DR: This article presents a framework for combining constant propagation, value numbering, and unreachable-code elimination, and shows how to combine two such frameworks and how to reason about the properties of the resulting framework.
Abstract: Modern optimizing compilers use several passes over a program's intermediate representation to generate good code. Many of these optimizations exhibit a phase-ordering problem. Getting the best code may require iterating optimizations until a fixed point is reached. Combining these phases can lead to the discovery of more facts about the program, exposing more opportunities for optimization. This article presents a framework for describing optimizations. It shows how to combine two such frameworks and how to reason about the properties of the resulting framework. The structure of the framework provides insight into when a combination yields better results. To make the ideas more concrete, this article presents a framework for combining constant propagation, value numbering, and unreachable-code elimination. It is an open question which other frameworks can be combined in this way.
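
A minimal sketch of the constant-propagation lattice such frameworks build on (illustrative Python, not the article's formalism): each value is TOP (undefined), a known constant, or BOTTOM (not constant), and meet() combines values reaching a join point.

TOP, BOTTOM = object(), object()   # undefined / known not constant

def meet(a, b):
    # Combine the values flowing into a control-flow join.
    if a is TOP: return b
    if b is TOP: return a
    return a if a == b else BOTTOM

def fold(op, args):
    # Fold an operation when every argument is a known constant.
    if any(a is BOTTOM for a in args): return BOTTOM
    if any(a is TOP for a in args): return TOP
    return {"add": lambda x, y: x + y,
            "mul": lambda x, y: x * y,
            "lt":  lambda x, y: x < y}[op](*args)

print(meet(2, 2))             # 2: same constant along both paths
print(meet(2, 3) is BOTTOM)   # True: conflicting constants
# A branch condition that folds to a constant makes one successor
# unreachable -- the kind of fact a combined framework exploits.
print(fold("lt", [2, 3]))     # True: the false edge is dead code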

173 citations

Proceedings Article (DOI)
26 Mar 2006
TL;DR: A set of compiler algorithms is described that produces TRIPS blocks full of useful instructions while enforcing the architecture's sometimes conflicting constraints, including an algorithm that assigns load and store identifiers to maximize the number of loads and stores within a block.
Abstract: Explicit data graph execution (EDGE) architectures offer the possibility of high instruction-level parallelism with energy efficiency. In EDGE architectures, the compiler breaks a program into a sequence of structured blocks that the hardware executes atomically. The instructions within each block communicate directly, instead of communicating through shared registers. The TRIPS EDGE architecture imposes restrictions on its blocks to simplify the microarchitecture: each TRIPS block has at most 128 instructions, issues at most 32 loads and/or stores, and executes at most 32 register bank reads and 32 writes. To detect block completion, each TRIPS block must produce a constant number of outputs (stores and register writes) and a branch decision. The goal of the TRIPS compiler is to produce TRIPS blocks full of useful instructions while enforcing these constraints. This paper describes a set of compiler algorithms that meet these sometimes conflicting goals, including an algorithm that assigns load and store identifiers to maximize the number of loads and stores within a block. We demonstrate the correctness of these algorithms in simulation on SPEC2000, EEMBC, and microbenchmarks extracted from SPEC2000 and others. We measure speedup in cycles over an Alpha 21264 on microbenchmarks.
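
A minimal sketch of checking the block constraints the abstract lists, using a hypothetical instruction representation (not the TRIPS compiler's internal form): at most 128 instructions, 32 loads/stores, and 32 register-bank reads and 32 writes per block.

def block_is_legal(instructions):
    # "reads"/"writes" list only register-bank accesses; operands
    # produced inside the block are communicated directly between
    # instructions and do not count against the register limits.
    loads_stores = sum(i["op"] in ("load", "store") for i in instructions)
    reg_reads = sum(len(i.get("reads", ())) for i in instructions)
    reg_writes = sum(len(i.get("writes", ())) for i in instructions)
    return (len(instructions) <= 128 and loads_stores <= 32
            and reg_reads <= 32 and reg_writes <= 32)

block = [{"op": "load", "reads": ("r1",)},   # one register-bank read
         {"op": "add"},                      # uses in-block operands only
         {"op": "store"},                    # counts toward the 32 loads/stores
         {"op": "mov", "writes": ("r2",)}]   # one register-bank write
print(block_is_legal(block))   # True: the block fits every limit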

159 citations


Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    1,837
2022    4,024
2021    194
2020    75
2019    31