
Showing papers on "Program transformation published in 1998"


Proceedings ArticleDOI
01 May 1998
TL;DR: The design and implementation of a compiler that translates programs written in a type-safe subset of the C programming language into highly optimized DEC Alpha assembly language programs, and a certifier that automatically checks the type safety and memory safety of any assembly language program produced by the compiler are presented.
Abstract: This paper presents the design and implementation of a compiler that translates programs written in a type-safe subset of the C programming language into highly optimized DEC Alpha assembly language programs, and a certifier that automatically checks the type safety and memory safety of any assembly language program produced by the compiler. The result of the certifier is either a formal proof of type safety or a counterexample pointing to a potential violation of the type system by the target program. The ensemble of the compiler and the certifier is called a certifying compiler. Several advantages of certifying compilation over previous approaches can be claimed. The notion of a certifying compiler is significantly easier to employ than a formal compiler verification, in part because it is generally easier to verify the correctness of the result of a computation than to prove the correctness of the computation itself. Also, the approach can be applied even to highly optimizing compilers, as demonstrated by the fact that our compiler generates target code, for a range of realistic C programs, which is competitive with both the cc and gcc compilers with all optimizations enabled. The certifier also drastically improves the effectiveness of compiler testing because, for each test case, it statically signals compilation errors that might otherwise require many executions to detect. Finally, this approach is a practical way to produce the safety proofs for a Proof-Carrying Code system, and thus may be useful in a system for safe mobile code.
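
To make the division of labour concrete, here is a minimal sketch (not the paper's compiler or certifier; all names and the toy stack language are invented) of a compiler that emits stack code and an independent certifier that checks a safety property of the emitted code, returning either "ok" or a counterexample. Checking the output is much simpler than verifying the compiler itself, which is the point the abstract makes.

```python
# Toy "certifying compiler": compile arithmetic expressions to stack code,
# then certify the *output* for stack safety (no underflow, one result left).

def compile_expr(e):
    """Compile a nested tuple like ('+', 1, ('*', 2, 3)) to stack code."""
    if isinstance(e, int):
        return [("push", e)]
    op, a, b = e
    return compile_expr(a) + compile_expr(b) + [("binop", op)]

def certify(code):
    """Independently check the generated code; return 'ok' or a counterexample."""
    depth = 0
    for instr in code:
        if instr[0] == "push":
            depth += 1
        elif instr[0] == "binop":
            if depth < 2:                      # would pop an empty stack
                return ("counterexample", instr)
            depth -= 1
    return ("ok", None) if depth == 1 else ("counterexample", "leftover stack values")

code = compile_expr(("+", 1, ("*", 2, 3)))
print(certify(code))                            # -> ('ok', None)
print(certify(code + [("binop", "+")]))         # -> ('counterexample', ('binop', '+'))
```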

397 citations


Proceedings ArticleDOI
29 Sep 1998
TL;DR: An extended language in which the side-conditions and contextual rules that arise in realistic optimizer specifications can themselves be expressed as strategy-driven rewrites, and a low-level core language which has a clear semantics, can be implemented straightforwardly and can itself be optimized.
Abstract: We describe a language for defining term rewriting strategies, and its application to the production of program optimizers. Valid transformations on program terms can be described by a set of rewrite rules; rewriting strategies are used to describe when and how the various rules should be applied in order to obtain the desired optimization effects. Separating rules from strategies in this fashion makes it easier to reason about the behavior of the optimizer as a whole, compared to traditional monolithic optimizer implementations. We illustrate the expressiveness of our language by using it to describe a simple optimizer for an ML-like intermediate representation. The basic strategy language uses operators such as sequential composition, choice, and recursion to build transformers from a set of labeled unconditional rewrite rules. We also define an extended language in which the side-conditions and contextual rules that arise in realistic optimizer specifications can themselves be expressed as strategy-driven rewrites. We show that the features of the basic and extended languages can be expressed by breaking down the rewrite rules into their primitive building blocks, namely matching and building terms in variable binding environments. This gives us a low-level core language which has a clear semantics, can be implemented straightforwardly and can itself be optimized. The current implementation generates C code from a strategy specification.
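
As a rough illustration of separating rules from strategies (the paper's system compiles strategy specifications to C; the Python sketch below only mimics the combinator style, and all names are invented), rules fail or succeed on a term, and strategy combinators decide where and in what order to apply them:

```python
# Terms are nested tuples ('op', arg1, arg2); a strategy maps a term to a
# rewritten term, or to None on failure.

def add_zero(t):                       # rule: x + 0  ->  x
    if isinstance(t, tuple) and t[0] == "+" and t[2] == 0:
        return t[1]
    return None

def mul_one(t):                        # rule: x * 1  ->  x
    if isinstance(t, tuple) and t[0] == "*" and t[2] == 1:
        return t[1]
    return None

def seq(s1, s2):                       # apply s1, then s2; fail if either fails
    return lambda t: (lambda r: None if r is None else s2(r))(s1(t))

def choice(s1, s2):                    # try s1; if it fails, try s2
    return lambda t: s1(t) if s1(t) is not None else s2(t)

def try_(s1):                          # never fail: fall back to the input term
    return lambda t: t if s1(t) is None else s1(t)

def all_children(s1):                  # apply s1 to every immediate subterm
    def s(t):
        if not isinstance(t, tuple):
            return t
        kids = [s1(c) for c in t[1:]]
        return None if None in kids else (t[0], *kids)
    return s

def bottomup(s1):                      # rewrite subterms first, then the root
    def s(t):
        return seq(all_children(s), s1)(t)
    return s

simplify = bottomup(try_(choice(add_zero, mul_one)))
print(simplify(("+", ("*", "x", 1), 0)))   # -> 'x'
```

Changing the optimizer then means changing either the rule set or the strategy expression, not a monolithic traversal.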

280 citations


Proceedings Article
15 Jun 1998
TL;DR: Load-time transformation is described, a stage in the program development lifecycle in which classes are modified at load time according to user-supplied directives, which allows the users to select transformations that add new features, customize the implementation of existing features, and apply the changes to all classes in the environment.
Abstract: While the availability of platform-independent code on the Internet is increasing, third-party code rarely exhibits all of the features desired by end users. Unfortunately, developers cannot foresee and provide for all possible extensions. In this paper, we describe load-time transformation, a stage in the program development lifecycle in which classes are modified at load time according to user-supplied directives. This allows the users to select transformations that add new features, customize the implementation of existing features, and apply the changes to all classes in the environment. The Java Object Instrumentation Environment (JOIE) is a toolkit for constructing transformations of Java classes. An enhanced class loader calls user-supplied transformers that specify rules for transforming target classes. We describe some applications of load-time transformation, including extending the Java environment, integrating classes with specialized environments, and adding functionality directly to classes.
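
A loose Python analogy of the load-time idea (JOIE itself rewrites Java class files inside an enhanced class loader; the loader and transformer names below are invented): user-selected transformers run on every class as it enters the environment.

```python
import functools

def add_tracing(cls):
    """Transformer: wrap every public method so calls are logged."""
    for name, fn in list(vars(cls).items()):
        if callable(fn) and not name.startswith("_"):
            @functools.wraps(fn)
            def wrapper(self, *a, _fn=fn, _name=name, **kw):
                print(f"calling {cls.__name__}.{_name}")
                return _fn(self, *a, **kw)
            setattr(cls, name, wrapper)
    return cls

TRANSFORMERS = [add_tracing]          # user-selected, applied environment-wide

def load_class(cls):
    """Stand-in for an enhanced class loader: run every transformer."""
    for t in TRANSFORMERS:
        cls = t(cls)
    return cls

@load_class
class Mailbox:
    def deliver(self, msg):
        return f"delivered: {msg}"

print(Mailbox().deliver("hi"))        # prints the trace line, then the result
```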

182 citations


Journal ArticleDOI
01 Sep 1998
TL;DR: This paper reports on the practical experience of the transformational approach to compilation, in the context of a substantial compiler.
Abstract: Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of “compilation by transformation” as its war-cry, trying to express as much as possible of the compilation process in the form of program transformations. This paper reports on our practical experience of the transformational approach to compilation, in the context of a substantial compiler.
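
A toy sketch of the "compilation by transformation" shape (GHC's simplifier works on its Core language in Haskell; this Python fragment, with invented rule names, only shows small meaning-preserving rewrites applied repeatedly to a fixed point):

```python
def substitute(e, name, val):
    """Naive substitution (ignores shadowing; fine for this sketch)."""
    if e == name:
        return val
    if isinstance(e, tuple):
        return (e[0],) + tuple(substitute(x, name, val) for x in e[1:])
    return e

def simplify(e):
    """One bottom-up pass of two rewrites: constant folding and
    inlining of let-bound atomic expressions."""
    if isinstance(e, tuple):
        e = (e[0],) + tuple(simplify(x) for x in e[1:])
    if isinstance(e, tuple) and e[0] == "+" and all(isinstance(x, int) for x in e[1:]):
        return e[1] + e[2]                       # ('+', 1, 2) -> 3
    if isinstance(e, tuple) and e[0] == "let" and isinstance(e[2], (int, str)):
        return substitute(e[3], e[1], e[2])      # inline a trivial binding
    return e

def transform_to_fixpoint(e):
    while True:
        e2 = simplify(e)
        if e2 == e:
            return e
        e = e2

prog = ("let", "x", 1, ("+", "x", 2))
print(transform_to_fixpoint(prog))    # -> 3
```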

159 citations


Journal ArticleDOI
TL;DR: This article elaborates global control for partial deduction, using the concept of a characteristic tree, encapsulating specialization behavior rather than syntactic structure, to guide generalization and polyvariance, and shows how this can be done in a correct and elegant way.
Abstract: Given a program and some input data, partial deduction computes a specialized program handling any remaining input more efficiently. However, controlling the process well is a rather difficult problem. In this article, we elaborate global control for partial deduction: for which atoms, among possibly infinitely many, should specialized relations be produced, meanwhile guaranteeing correctness as well as termination? Our work is based on two ingredients. First, we use the concept of a characteristic tree, encapsulating specialization behavior rather than syntactic structure, to guide generalization and polyvariance, and we show how this can be done in a correct and elegant way. Second, we structure combinations of atoms and associated characteristic trees in global trees registering “causal” relationships among such pairs. This allows us to spot looming nontermination and perform proper generalization in order to avert the danger, without having to impose a depth bound on characteristic trees. The practical relevance and benefits of the work are illustrated through extensive experiments. Finally, a similar approach may improve upon current (on-line) control strategies for program transformation in general such as (positive) supercompilation of functional programs. It also seems valuable in the context of abstract interpretation to handle infinite domains of infinite height with more precision.
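
One standard ingredient of this style of on-line control is the homeomorphic embedding test used as a "whistle": when a new atom embeds one met earlier, specialization is stopped and generalization is performed instead. The sketch below shows only the embedding check itself, on an invented term representation, not the article's characteristic-tree machinery.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()     # e.g. "X", "Ys"

def embedded(s, t):
    """True iff s is homeomorphically embedded in t (s <| t)."""
    if is_var(s) and is_var(t):
        return True
    if isinstance(t, tuple) and any(embedded(s, arg) for arg in t[1]):
        return True                                    # dive into a subterm of t
    if isinstance(s, tuple) and isinstance(t, tuple):
        f, sargs = s
        g, targs = t
        return f == g and len(sargs) == len(targs) and \
               all(embedded(a, b) for a, b in zip(sargs, targs))   # couple
    return s == t                                      # identical constants

# Atoms met while specializing: rev(Xs, []) and, later, rev(Xs, [X | []]).
older = ("rev", ["Xs", ("nil", [])])
newer = ("rev", ["Xs", ("cons", ["X", ("nil", [])])])
print(embedded(older, newer))   # -> True: the whistle blows, so generalize
```

Because the test is a well-quasi-order, it can replace a fixed depth bound while still guaranteeing that the whistle eventually blows on any infinite sequence.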

111 citations


Journal ArticleDOI
TL;DR: The cache-and-prune method presented in the article consists of three stages: the original program is extended to cache the results of all its intermediate subcomputations as well as the final result, the extended program is incrementalized so that computation on a new input can use all intermediate results on an old input.
Abstract: A systematic approach is given for deriving incremental programs that exploit caching. The cache-and-prune method presented in the article consists of three stages: (I) the original program is extended to cache the results of all its intermediate subcomputations as well as the final result, (II) the extended program is incrementalized so that computation on a new input can use all intermediate results on an old input, and (III) unused results cached by the extended program and maintained by the incremental program are pruned away, leaving a pruned extended program that caches only useful intermediate results and a pruned incremental program that uses and maintains only useful results. All three stages utilize static analyses and semantics-preserving transformations. Stages I and III are simple, clean, and fully automatable. The overall method has a kind of optimality with respect to the techniques used in Stage II. The method can be applied straightforwardly to provide a systematic approach to program improvement via caching.
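
A minimal sketch of the three stages on an invented example (summing squares, under the change operation "append one element"); the function and change operation are not taken from the article:

```python
def f(xs):                            # original program
    return sum(v * v for v in xs)

# Stage I: extend f to cache intermediate results (the running totals).
def f_ext(xs):
    partials, total = [], 0
    for v in xs:
        total += v * v
        partials.append(total)
    return total, partials            # final result plus cached intermediates

# Stage II: incrementalize -- compute f_ext(xs + [e]) from f_ext(xs).
def f_inc(cache, e):
    total, partials = cache
    total += e * e
    return total, partials + [total]

# Stage III: prune -- only the running total is actually used by f_inc,
# so the pruned versions drop the list of partial sums.
def f_ext_pruned(xs):
    return sum(v * v for v in xs)

def f_inc_pruned(total, e):
    return total + e * e

cache = f_ext([1, 2, 3])
print(f_inc(cache, 4)[0], f([1, 2, 3, 4]))   # both 30
```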

99 citations


Book ChapterDOI
TL;DR: A program transformation technique is used, namely, partial evaluation, to automatically transform a DSL program into a compiled program, given only an interpreter.

Abstract: Implementation. The abstract machine is then given an implementation (typically, a library), or possibly many, to account for different operational contexts. The valuation function can be implemented as an interpreter based on an abstract machine implementation, or as a compiler to abstract machine instructions. Partial evaluation. While interpreting is more flexible, compiling is more efficient. To get the best of both worlds, we use a program transformation technique, namely, partial evaluation, to automatically transform a DSL program into a compiled program, given only an interpreter. Each of the above methodology steps is further detailed in a separate section of this paper. 1.6 A Working Example. To illustrate our approach, an example of DSL is used throughout the paper. We introduce a simple electronic mail processing application as a working example. Conceptually this application enables users to specify automatic treatments of incoming messages depending on their nature and contents: dispatching messages to people or folders, filtering spam, offering a shell escape (e.g., to feed an electronic agenda), replying to messages when absent, etc. This example is inspired by a Unix program called slocal which offers users a way of processing inbound mail. With slocal, user-defined treatments are expressed in the form of rules. Each rule consists of a string to be searched in a message field (e.g., Subject, From) and an action to be performed if the string
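
A hand-written sketch of the interpret-then-specialize idea on an invented slocal-like rule language (the paper applies an actual partial evaluator to a DSL interpreter; here the "specializer" is written by hand purely to show the intended effect):

```python
RULES = [("Subject", "spam", "discard"),
         ("From", "boss", "folder:urgent")]

def interpret(rules, message):
    """Interpreter: both the rules and the message are dynamic."""
    for field, needle, action in rules:
        if needle in message.get(field, ""):
            return action
    return "inbox"

def specialize(rules):
    """'Partial evaluation' by hand: the rules are static, so unfold the
    loop over them and emit straight-line code for just this rule set."""
    lines = ["def filter_message(message):"]
    for field, needle, action in rules:
        lines.append(f"    if {needle!r} in message.get({field!r}, ''):")
        lines.append(f"        return {action!r}")
    lines.append("    return 'inbox'")
    ns = {}
    exec("\n".join(lines), ns)        # the residual (compiled) program
    return ns["filter_message"]

msg = {"Subject": "cheap spam offer", "From": "unknown"}
compiled = specialize(RULES)
print(interpret(RULES, msg), compiled(msg))   # -> discard discard
```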

96 citations


Proceedings Article
01 Jan 1998
TL;DR: This paper describes a general framework for formally underpinning the schema transformation process and illustrates the applicability of the framework by showing how to define a set of primitive transformations for an extended ER model and by defining some of the common schema transformations as sequences of these primitive transformations.
Abstract: Several methodologies for integrating database schemas have been proposed in the literature, using various common data models (CDMs). As part of these methodologies, transformations have been defined that map between schemas which are in some sense equivalent. This paper describes a general framework for formally underpinning the schema transformation process. Our formalism clearly identifies which transformations apply for any instance of the schema and which only for certain instances. We will illustrate the applicability of the framework by showing how to define a set of primitive transformations for an extended ER model and by defining some of the common schema transformations as sequences of these primitive transformations. The same approach could be used to formally define transformations on other CDMs.

81 citations


Book ChapterDOI
15 Jun 1998
TL;DR: Conditions are presented under which the slack of a channel in a distributed computation can be modified without changing its behavior; these results can be used to modify the degree of pipelining in an asynchronous system.
Abstract: We present conditions under which we can modify the slack of a channel in a distributed computation without changing its behavior. These results can be used to modify the degree of pipelining in an asynchronous system. The generality of the result shows the wide variety of pipelining alternatives presented to the designer of a concurrent system. We give examples of program transformations which can be used in the design of concurrent systems whose correctness depends on the conditions presented.

81 citations


Journal ArticleDOI
30 Oct 1998
TL;DR: In this paper, a general framework for formally underpinning the schema transformation process is described, which clearly identifies which transformations apply for any instance of the schema and which only for certain instances.
Abstract: Several methodologies for integrating database schemas have been proposed in the literature, using various common data models (CDMs). As part of these methodologies, transformations have been defined that map between schemas which are in some sense equivalent. This paper describes a general framework for formally underpinning the schema transformation process. Our formalism clearly identifies which transformations apply for any instance of the schema and which only for certain instances. We will illustrate the applicability of the framework by showing how to define a set of primitive transformations for an extended ER model and by defining some of the common schema transformations as sequences of these primitive transformations. The same approach could be used to formally define transformations on other CDMs.

80 citations


Proceedings ArticleDOI
01 May 1998
TL;DR: This approach allows systems design to begin with the flexibility of a general-purpose language, followed by gradual refinement into a more restricted form necessary for specification, and to end with a system specification possessing the properties of the formal model.
Abstract: Successive, formal refinement is a new approach for specification of embedded systems using a general-purpose programming language. Systems are formally modeled as abstractable synchronous reactive systems, and Java is used as the design input language. A policy of use is applied to Java, in the form of language usage restrictions and class-library extensions to ensure consistency with the formal model. A process of incremental, user-guided program transformation is used to refine a Java program until it is consistent with the policy of use. The final product is a system specification possessing the properties of the formal model, including deterministic behavior, bounded memory usage and bounded execution time. This approach allows systems design to begin with the flexibility of a general-purpose language, followed by gradual refinement into a more restricted form necessary for specification.

Journal Article
TL;DR: In this article, a general approach for automatic and accurate time-bound analysis is described, which consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time-bound function based on input parameters, optimizations to make the overall analysis efficient as well as accurate, and measurements of primitive parameters.
Abstract: This paper describes a general approach for automatic and accurate time-bound analysis. The approach consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time-bound function based on input parameters, optimizations to make the overall analysis efficient as well as accurate, and measurements of primitive parameters, all at the source-language level. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The measured worst-case times are closely bounded by the calculated bounds.

Book ChapterDOI
01 Jun 1998
TL;DR: The approach consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time- bound function based on input parameters, optimizations to make the overall analysis efficient as well as accurate, and measurements of primitive parameters, all at the source-language level.
Abstract: This paper describes a general approach for automatic and accurate time-bound analysis. The approach consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time-bound function based on input parameters, optimizations to make the overall analysis efficient as well as accurate, and measurements of primitive parameters, all at the source-language level. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The measured worst-case times are closely bounded by the calculated bounds.
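
A small invented example of the idea (not the paper's system, which analyzes Scheme programs): a time-bound function built from the program's structure over the known part of the input (here, only the list length), compared against a measured worst-case count.

```python
def insert(x, xs, cost):
    for i, y in enumerate(xs):
        cost[0] += 1                 # count one comparison
        if x <= y:
            return xs[:i] + [x] + xs[i:]
    return xs + [x]

def isort(xs, cost):
    out = []
    for x in xs:
        out = insert(x, out, cost)
    return out

def time_bound(n):
    """Bound read off the program: inserting into a list of length i costs
    at most i comparisons, so the total is 0 + 1 + ... + (n - 1)."""
    return n * (n - 1) // 2

n = 8
cost = [0]
isort(list(range(n)), cost)          # already-sorted input is worst for this insert
print(cost[0], "<=", time_bound(n))  # 28 <= 28
```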

Journal ArticleDOI
TL;DR: A collection of transformations that can dramatically reduce overhead in the common case (when the access is valid) while preserving the program state at the time of an exception while fully compliant with the Java language semantics are described.
Abstract: The Java™ language specification requires that all array references be checked for validity. If a reference is invalid, an exception must be thrown. Furthermore, the environment at the time of the exception must be preserved and made available to whatever code handles the exception. Performing the checks at run time incurs a large penalty in execution time. In this paper we describe a collection of transformations that can dramatically reduce this overhead in the common case (when the access is valid) while preserving the program state at the time of an exception. The transformations allow trade-offs to be made in the efficiency and size of the resulting code, and are fully compliant with the Java language semantics. Preliminary evaluation of the effectiveness of these transformations shows that performance improvements of 10 times and more can be achieved for array-intensive Java programs.
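
One transformation in this spirit is loop versioning; the sketch below (in Python, purely to show the shape; the paper's transformations operate inside a Java compiler and preserve Java's exception semantics) tests the whole index range once and falls back to the fully checked loop when the test fails, so the exception and the state it sees are unchanged.

```python
def checked_sum(a, lo, hi):
    """Original loop: every access checked; state at the fault is preserved."""
    s = 0
    for i in range(lo, hi):
        if not (0 <= i < len(a)):
            raise IndexError(f"invalid access at i={i}, partial sum={s}")
        s += a[i]
    return s

def versioned_sum(a, lo, hi):
    """Transformed loop: one up-front range test selects a check-free fast
    path; otherwise fall back to the original loop, keeping its behaviour."""
    if 0 <= lo and hi <= len(a):       # every access provably in bounds
        s = 0
        for i in range(lo, hi):
            s += a[i]                   # no per-iteration check
        return s
    return checked_sum(a, lo, hi)       # rare slow path

a = [1, 2, 3, 4]
print(versioned_sum(a, 0, len(a)))      # fast path -> 10
```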

01 Jan 1998
TL;DR: This thesis explores the theory and applications of modular monadic semantics, including: building blocks for individual programming features, equational reasoning with laws and axioms, modular proofs, program transformation, modular interpreters, and compiler construction.
Abstract: Modular monadic semantics is a high-level and modular form of denotational semantics. It is capable of capturing individual programming language features and their interactions. This thesis explores the theory and applications of modular monadic semantics, including: building blocks for individual programming features, equational reasoning with laws and axioms, modular proofs, program transformation, modular interpreters, and compiler construction. We will demonstrate that the modular monadic semantics framework makes programming languages easy to specify, reason about, and implement.

Book ChapterDOI
12 Sep 1998
TL;DR: A program that is easy to understand often fails to be efficient, while a more efficient solution often compromises clarity.
Abstract: When writing a program, especially in a high level language such as Haskell, the programmer is faced with a tension between abstraction and efficiency. A program that is easy to understand often fails to be efficient, while a more efficient solution often compromises clarity.

Book ChapterDOI
TL;DR: This paper gives an introduction to Turchin's supercompiler, a program transformer for functional programs which performs optimizations beyond partial evaluation and deforestation.
Abstract: This paper gives an introduction to Turchin's supercompiler, a program transformer for functional programs which performs optimizations beyond partial evaluation and deforestation. More precisely, the paper presents positive supercompilation.

Proceedings ArticleDOI
Zhong Shao1
29 Sep 1998
TL;DR: This paper exploits the semantic property of ML-style modules to support efficient cross-module compilation and presents a type-directed translation of the MacQueen-Tofte higher-order modules into a predicative variant of the polymorphic λ-calculus Fω.
Abstract: Higher-order modules are very effective in structuring large programs and defining generic, reusable software components. Unfortunately, many compilation techniques for the core languages do not work across the module boundaries. As a result, few optimizing compilers support these module facilities well. This paper exploits the semantic property of ML-style modules to support efficient cross-module compilation. More specifically, we present a type-directed translation of the MacQueen-Tofte higher-order modules into a predicative variant of the polymorphic λ-calculus Fω. Because modules can be compiled in the same way as ordinary polymorphic functions, standard type-based optimizations such as representation analysis immediately carry over to the module languages. We further show that the full-transparency property of the MacQueen-Tofte system yields a near optimal cross-module compilation framework. By propagating various static information through the module boundaries, many static program analyses for the core languages can be extended to work across higher-order modules.

Book ChapterDOI
TL;DR: These notes present basic principles of partial evaluation using the simple imperative language FCL (a language of flowcharts introduced by Jones and Gomard), and exercises include proving various properties about the systems using the operational semantics, and modifying and extending the implementations.
Abstract: These notes present basic principles of partial evaluation using the simple imperative language FCL (a language of flowcharts introduced by Jones and Gomard). Topics include online partial evaluators, offline partial evaluators, and binding-time analysis. The goal of the lectures is to give a rigorous presentation of the semantics of partial evaluation systems, while also providing details of actual implementations. Each partial evaluation system is specified by an operational semantics, and each is implemented in Scheme and Java. Exercises include proving various properties about the systems using the operational semantics, and modifying and extending the implementations.
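
A very small online-specialization sketch in Python, using an invented expression syntax rather than the notes' actual FCL: static subexpressions are computed away and dynamic ones are residualized, as in the classic power example.

```python
def pe(e, env):
    """Specialize expression e: env maps *static* variables to values;
    anything else is dynamic and is residualized as code."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return env.get(e, e)                 # known value or residual variable
    op, a, b = e
    a, b = pe(a, env), pe(b, env)
    if isinstance(a, int) and isinstance(b, int):
        return {"+": a + b, "*": a * b}[op]  # both static: compute now
    if op == "*" and a == 1:                 # small algebraic clean-up
        return b
    if op == "*" and b == 1:
        return a
    return (op, a, b)                        # mixed: emit residual code

def specialize_power(n):
    """Unfold power(x, n) for a static exponent n, leaving x dynamic."""
    code = 1
    for _ in range(n):
        code = pe(("*", code, "x"), {})
    return code

print(specialize_power(3))   # -> ('*', ('*', 'x', 'x'), 'x')
```

An offline partial evaluator would instead first run a binding-time analysis to decide once which variables are static, then specialize; the sketch above decides on the fly.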

01 Jan 1998
TL;DR: This document presents the functionalities of Odyssee through an illustrative example, and completely describes the command language of Odyssee and its graphical user interface.
Abstract: Odyssee is an automatic differentiation processor for fortran code which implements both the forward and reverse mode of automatic differentiation. This document presents the functionalities of Odyssee through an illustrative example. It presents the basic concepts and objects of Odyssee, and completely describes the command language of Odyssee and its graphical user interface.
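
Odyssee transforms Fortran source, so as a language-neutral sketch of what its forward mode computes, here is the dual-number formulation in Python (reverse mode, which Odyssee also supports, is not shown; the example function is invented).

```python
class Dual:
    """Carries a value and its derivative with respect to one input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)   # product rule
    __rmul__ = __mul__

def f(x):                     # the "original code" being differentiated
    return x * x * x + 2 * x

x = Dual(3.0, 1.0)            # seed dx/dx = 1
y = f(x)
print(y.val, y.der)           # 33.0 and f'(3) = 3*9 + 2 = 29.0
```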

Journal ArticleDOI
TL;DR: In this paper, a program transformation strategy is presented that is able to reduce the buffer size and power consumption for a relatively large class of (pseudo)regular data-dominated signal processing algorithms.
Abstract: A program transformation strategy is presented that is able to reduce the buffer size and power consumption for a relatively large class of (pseudo)regular data-dominated signal processing algorithms. Our methodology is targeted toward an implementation on programmable processors, but most of the principles remain valid for a custom processor implementation. As power and area cost are crucial in the context of embedded multimedia applications, this strategy can be very valuable. The feasibility of our approach is demonstrated on a representative high-speed video processing algorithm for which we obtain a substantial reduction of the area and power consumption compared to the classical approaches.
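
A tiny sketch of one kind of rewrite involved (the article's strategy is far more general): fusing a producer loop with its consumer removes the intermediate frame-sized buffer, shrinking memory and hence power cost. The example data and functions are invented.

```python
N = 1024
samples = list(range(N))

# Before: the full intermediate array 'tmp' (N elements) is buffered.
def before(xs):
    tmp = [2 * v for v in xs]            # producer loop
    return sum(v + 1 for v in tmp)       # consumer loop

# After: producer and consumer fused; only a scalar is live at any time.
def after(xs):
    acc = 0
    for v in xs:
        acc += 2 * v + 1                 # same computation, no buffer
    return acc

assert before(samples) == after(samples)
print(after(samples))
```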

Journal ArticleDOI
15 Jun 1998
TL;DR: In recent years increasing consensus has emerged that program transformers, e.g., partial evaluation and unfold/fold transformations, should terminate; a compiler should stop even if it performs fancy optimizations!
Abstract: In recent years increasing consensus has emerged that program transformers, e.g., partial evaluation and unfold/fold transformations, should terminate; a compiler should stop even if it performs fancy optimizations! A number of techniques to ensure termination of program transformers have been invented, but their correctness proofs are sometimes long and involved.

Proceedings ArticleDOI
23 Nov 1998
TL;DR: This paper presents an advanced macro system based on ideas borrowed from reflection that provides metaobjects as the data structure used for the macro processing, instead of an abstract syntax tree, which makes it easy to implement a range of transformations of object oriented programs.
Abstract: There are a number of programmable macro systems such as Lisp's. While they can handle complex program transformation, they still have difficulty in handling some kinds of transformation typical in object oriented programming. The paper examines this problem and, to address it, presents an advanced macro system based on ideas borrowed from reflection. Unlike other macro systems, our macro system provides metaobjects as the data structure used for the macro processing, instead of an abstract syntax tree. This feature makes it easy to implement a range of transformations of object oriented programs.
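
A loose Python analogy of macros that receive a metaobject instead of a syntax tree (the paper targets Java; every name below is invented to mimic that interface): the transformation manipulates a class description, which is then turned into a real class.

```python
class ClassMetaobject:
    def __init__(self, name, fields):
        self.name, self.fields, self.methods = name, list(fields), {}

def add_accessors(mobj):
    """A 'macro' working on the metaobject: declare a getter per field."""
    for f in mobj.fields:
        mobj.methods["get_" + f] = (lambda f: lambda self: getattr(self, f))(f)
    return mobj

def expand(mobj):
    """Turn the transformed metaobject into an ordinary Python class."""
    def __init__(self, **kw):
        for f in mobj.fields:
            setattr(self, f, kw.get(f))
    return type(mobj.name, (), {"__init__": __init__, **mobj.methods})

Point = expand(add_accessors(ClassMetaobject("Point", ["x", "y"])))
p = Point(x=1, y=2)
print(p.get_x(), p.get_y())    # -> 1 2
```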

Book ChapterDOI
TL;DR: The proposed matching method is based on the construction of compact bipartite graphs, and is designed for working very efficiently on specific classes of AC patterns, and it is shown how to refine this algorithm to work in an eager way.
Abstract: We address the problem of term normalisation modulo associative-commutative (AC) theories, and describe several techniques for compiling many-to-one AC matching and reduced term construction. The proposed matching method is based on the construction of compact bipartite graphs, and is designed for working very efficiently on specific classes of AC patterns. We show how to refine this algorithm to work in an eager way. General patterns are handled through a program transformation process. Variable instantiation resulting from the matching phase and construction of the resulting term are also addressed. Our experimental results with the system ELAN provide strong evidence that compilation of many-to-one AC normalisation using the combination of these few techniques is crucial for improving the performance of algebraic programming languages.

Proceedings ArticleDOI
13 Jul 1998
TL;DR: This paper presents a new program transformation infrastructure, called TSF, for application developers, built on top of an existing Fortran engineering tool named FORESYS, and overviews the related tools and the main features of the TSF transformation scripts.
Abstract: Code maintenance, tuning or parallelization on high performance computers are very time consuming. They usually involve many program transformations. These transformations range from very simple ones (i.e. changing array accesses) to very complex ones (i.e. parallelism and data locality optimizations [11, 21]). Furthermore these transformations are usually based on information extracted either statically (i.e. data flow and data dependence analysis [S]) or dynamically (i.e. via program instrumentation [5]). Most of the currently available tools (compilers, parallelizers, preprocessors, profilers, etc.) cover part of the needs but they suffer from a major drawback: they cannot be easily extended. Extensibility allows the user (i.e. an application developer in our case) to add new program transformations specific to his application context. However adding new features must be possible without having an intimate knowledge of the tool implementation nor having to modify the source code or to link to it. In this paper we present a new program transformation infrastructure, called TSF, for application developers. This infrastructure is built on top of an existing Fortran engineering tool named FORESYS [23]. In the following we overview the related tools and the main features of the TSF transformation scripts. We conclude this section with an overview of the paper.

Journal ArticleDOI
01 Sep 1998
TL;DR: In this article, a type soundness theorem for Polymorphic C has been proved, and a transition semantics has been proposed to model program execution in terms of transformations of partial derivation trees.
Abstract: Advanced polymorphic type systems have come to play an important role in the world of functional programming. But, so far, these type systems have had little impact upon widely used imperative programming languages like C and C++. We show that ML-style polymorphism can be integrated smoothly into a dialect of C, which we call Polymorphic C. It has the same pointer operations as C, including the address-of operator &, the dereferencing operator ∗, and pointer arithmetic. We give a natural semantics for Polymorphic C, and prove a type soundness theorem that gives a rigorous and useful characterization of what can go wrong when a well-typed Polymorphic C program is executed. For example, a well-typed Polymorphic C program may fail to terminate, or it may abort due to a dangling pointer error. Proving such a type soundness theorem requires a notion of an attempted program execution; we show that a natural semantics gives rise quite naturally to a transition semantics, which we call a natural transition semantics, that models program execution in terms of transformations of partial derivation trees. This technique should be generally useful in proving type soundness theorems for languages defined using natural semantics.

Book ChapterDOI
TL;DR: This work presents a novel method for staging static analyses using abstraction-based program specialization (ABPS) and gives an ABPS system that serves as a formal foundation for a suite of analysis and verification tools that are being developed for Ada programs.
Abstract: Conventional partial evaluators specialize programs with respect to concrete values, but programs can also be specialized with respect to abstractions of concrete values. We present a novel method for staging static analyses using abstraction-based program specialization (ABPS). Building on earlier work by Consel and Khoo and Jones, we give an ABPS system that serves as a formal foundation for a suite of analysis and verification tools that we are developing for Ada programs. Our tool set makes use of existing verification packages. Currently many programs must be hand-transformed before they can be submitted to these packages. We have determined that these hand-transformations can be carried out automatically using ABPS. Thus, preprocessing programs using ABPS can significantly extend the applicability of existing tools without modifying the tools themselves.

Book ChapterDOI
26 Oct 1998
TL;DR: The proposed framework does not limit the programming language or the language of assertions unnecessarily in order to make the assertions statically decidable; instead, it deals throughout with approximations.
Abstract: As constraint logic programming matures and larger applications are built, an increased need arises for advanced development and debugging environments. Assertions are linguistic constructions which allow expressing properties of programs. Classical examples of assertions are type declarations. However, herein we are interested in supporting a more general setting [3, 1] in which, on one hand assertions can be of a more general nature, including properties which are statically undecidable, and, on the other, only a small number of assertions may be present in the program, i.e., the assertions are optional. In particular, we do not wish to limit the programming language or the language of assertions unnecessarily in order to make the assertions statically decidable. Consequently, the proposed framework needs to deal throughout with approximations [2].

Journal Article
TL;DR: In order to ensure constancy of purpose with the university and college mission statements, the Master's of Project Management (MPM) program faculty re-examined those mission statements and developed a new mission statement for the program as discussed by the authors.
Abstract: Everyone knows the rules in higher education are changing. Today's university is no longer restricted to a specific time or place. An institution must apply today's technology to its curriculum and programs to meet the customer's needs, to compete with other institutions and possibly even to survive. How does one make a successful transformation from a traditional on-campus graduate program to one that is universally accessible? Save yourself time and effort: find out what others are doing -- what worked for them and what did not -- and decide what may work for your program. This paper describes how Western Carolina University met this challenge and the lessons we learned. Our process contains the following four steps: * Evaluate your current mission, customer needs and program to determine your goals; * Form a cross-disciplinary team; * Develop a program structure; and * Implement continuous improvement techniques. Step One Evaluate your current mission, customer needs and program. What are they now, what are the new needs and demands? Based on this information, what are your goals? The Mission: In order to ensure constancy of purpose with the university and college mission statements, the Master's of Project Management (MPM) program faculty re-examined those mission statements and developed a new mission statement for the program. The new mission statement will serve as our guide throughout the current program transformation and in future decision-making processes as the program is continuously evaluated and tailored to meet customer needs. Customer Needs: The demand for Project Management Professional (PMP) certification from the internationally recognized Project Management Institute (PMI) is global and rapidly increasing. The PMP certification for program/project managers is mandated by various organizations worldwide including the U.S. Department of Defense and the Department of Energy. A survey was conducted to determine the educational/training needs of business and industry. The results indicated that many industries have hired consultants and invested in other training mechanisms to deliver the necessary information for their employees to successfully complete the PMP exam. While many institutions offer "certification" at the end of their course work, our survey indicated the market participants would prefer a complete graduate degree as a result of their efforts rather than partial credit toward a graduate degree or a certificate of training. As a result of our market survey, the major goal was to transition the current on-campus Master's of Project Management degree program into an asynchronously delivered, comprehensive, fully accredited, customer-centered and curriculum-driven program for delivery over the World Wide Web. The Program: The current traditional classroom graduate program is based in Western Carolina University's College of Business, which is fully accredited by the International Association of Management Education, AACSB. It was the first Project Management Institute (PMI) accredited degree program in the United States offered in a fully accredited institution. The College of Business has offered an on-campus MPM degree since the mid-1980s; however, enrollment in this program has been relatively low compared to other Master's-level business programs at the university. 
This is due in part to the fact that a prospective student wishing to pursue this specialized degree -- which requires a one calendar year, on-campus commitment -- usually has been a full-time employee in business or industry, has family obligations and lives outside of a reasonable commuting distance from the WCU campus. Step Two Develop a cross-disciplinary team. Due to the public demand for this type of graduate degree, the MPM degree program was selected to be the first university-supported Internet distance learning effort. …

Proceedings ArticleDOI
14 May 1998
TL;DR: It is proved that any sound liveness analysis induces a correct program transformation, and set constraints for an interprocedural update optimization that runs in polynomial time are presented.
Abstract: Destructive array update optimization is critical for writing scientific codes in functional languages. We present set constraints for an interprocedural update optimization that runs in polynomial time. This is a multi pass optimization, involving interprocedural flow analyses for aliasing and liveness. We characterize the soundness of these analyses using small step operational semantics. We have also proved that any sound liveness analysis induces a correct program transformation.