
Showing papers on "Static program analysis" published in 1994


Journal ArticleDOI
TL;DR: SCRUPLE is described, a finite state machine-based source code search tool that efficiently implements a framework in which pattern languages are used to specify interesting code features; the pattern languages are derived by extending the source programming language with pattern-matching symbols.
Abstract: For maintainers involved in understanding and reengineering large software, locating source code fragments that match certain patterns is a critical task. Existing solutions to the problem are few, and they either involve manual, painstaking scans of the source code using tools based on regular expressions, or the use of large, integrated software engineering environments that include simple pattern-based query processors in their toolkits. We present a framework in which pattern languages are used to specify interesting code features. The pattern languages are derived by extending the source programming language with pattern-matching symbols. We describe SCRUPLE, a finite state machine-based source code search tool that efficiently implements this framework. We also present experimental performance results obtained from a SCRUPLE prototype, and the user interface of a source code browser built on top of SCRUPLE.
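As a rough illustration of the pattern-language idea (not SCRUPLE's actual syntax or its FSM construction), the Python sketch below matches a source pattern extended with two hypothetical wildcards: $v for a single identifier and $$ for any token run.

    import re

    # Hypothetical wildcards loosely modelled on the abstract's idea:
    # $v matches one identifier, $$ matches any (possibly empty) token run.
    TOKEN = re.compile(r"\$\$|\$[A-Za-z_]\w*|[A-Za-z_]\w*|\S")

    def tokenize(text):
        return TOKEN.findall(text)

    def matches(pattern, code):
        """Backtracking matcher; SCRUPLE compiles patterns to FSMs instead."""
        p_toks, c_toks = tokenize(pattern), tokenize(code)
        def walk(p, c):
            if p == len(p_toks):
                return c == len(c_toks)
            tok = p_toks[p]
            if tok == "$$":          # wildcard over any run of tokens
                return any(walk(p + 1, k) for k in range(c, len(c_toks) + 1))
            if c == len(c_toks):
                return False
            if tok.startswith("$"):  # single-identifier wildcard like $v
                return bool(re.fullmatch(r"[A-Za-z_]\w*", c_toks[c])) and walk(p + 1, c + 1)
            return tok == c_toks[c] and walk(p + 1, c + 1)
        return walk(0, 0)

    print(matches("while ( $v != NULL ) { $$ }",
                  "while (p != NULL) { p = p->next; }"))   # True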

235 citations


Patent
01 Mar 1994
TL;DR: In this article, a program for monitoring computer system performance includes a collection of source code modules in the form of a high level language, each of which is compiled into a corresponding object code module.
Abstract: A program for monitoring computer system performance includes a collection of source code modules in the form of a high level language. Each of the source code modules is compiled into a corresponding object code module. The object code modules are translated into a single linked code module in the form of a machine independent register transfer language. The linked code module is partitioned into basic program components. The basic program components include procedures, basic blocks within procedures, and instructions within basic blocks. Fundamental instrumentation routines identify, locate, and modify specific program components to be monitored. The modified linked code module is converted to machine executable code to be executed in the computer system so that performance data can be collected while the program is executing in the computer.
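The partitioning into basic blocks described here follows the classic "leader" construction; below is a minimal sketch over a toy instruction list (the instruction format is invented for illustration, not the patent's register transfer language).

    # Leader rule: the first instruction, every jump target, and every
    # instruction after a jump each begin a new basic block.
    def basic_blocks(instrs):
        """instrs: list of (opcode, jump_target_index_or_None) pairs."""
        leaders = {0}
        for i, (op, target) in enumerate(instrs):
            if op in ("jmp", "br"):
                if target is not None:
                    leaders.add(target)        # jump target starts a block
                if i + 1 < len(instrs):
                    leaders.add(i + 1)         # fall-through starts a block
        starts = sorted(leaders)
        return [instrs[a:b] for a, b in zip(starts, starts[1:] + [len(instrs)])]

    prog = [("load", None), ("br", 4), ("add", None), ("jmp", 1), ("store", None)]
    for block in basic_blocks(prog):
        print(block)
    # Four blocks: [('load', None)] [('br', 4)] [('add', None), ('jmp', 1)] [('store', None)]

Once blocks are delimited this way, instrumentation code can be attached per procedure, per block, or per instruction, which matches the three levels of program component the patent names.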

224 citations


Proceedings ArticleDOI
19 Sep 1994
TL;DR: A prototype tool for determining collections of files sharing a large amount of text has been developed and applied to a 40 megabyte source tree containing two releases of the gcc compiler.
Abstract: Legacy systems pose problems to maintainers that can be solved partially with effective tools. A prototype tool for determining collections of files sharing a large amount of text has been developed and applied to a 40 megabyte source tree containing two releases of the gcc compiler. Similarities in source code and documentation corresponding to software cloning, movement and inertia between releases, as well as the effects of preprocessing easily stand out in a way that immediately conveys nonobvious structural information to a maintainer taking responsibility for such a system.
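A toy version of the underlying computation, fingerprinting fixed-size line windows and scoring file pairs by fingerprint overlap; the actual prototype's matching is considerably more refined.

    import hashlib
    from itertools import combinations

    def fingerprints(text, window=5):
        """Hash every run of `window` consecutive non-blank lines."""
        lines = [l.strip() for l in text.splitlines() if l.strip()]
        return {hashlib.md5("\n".join(lines[i:i + window]).encode()).hexdigest()
                for i in range(max(len(lines) - window + 1, 1))}

    def similar_pairs(files, threshold=0.5):
        """files: {name: text}; return pairs sharing a large amount of text."""
        prints = {name: fingerprints(text) for name, text in files.items()}
        return [(a, b) for a, b in combinations(prints, 2)
                if len(prints[a] & prints[b]) /
                   (len(prints[a] | prints[b]) or 1) >= threshold]

    a = "int main() {\n  init();\n  run();\n  done();\n  return 0;\n}"
    print(similar_pairs({"v1.c": a, "v2.c": a}))   # [('v1.c', 'v2.c')]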

212 citations


Proceedings ArticleDOI
01 Feb 1994
TL;DR: The purpose of this paper is to propose conceptual and software support for the design of abstract domains; it contains two main contributions: the notion of open product and a generic pattern domain.
Abstract: Abstract interpretation [7] is a systematic methodology to design static program analysis which has been studied extensively in the logic programming community, because of the potential for optimizations in logic programming compilers and the sophistication of the analyses which require conceptual support. With the emergence of efficient generic abstract interpretation algorithms for logic programming, the main burden in building an analysis is the abstract domain, which gives a safe approximation of the concrete domain of computation. However, accurate abstract domains for logic programming are often complex because of the variety of analyses to perform, their interdependence, and the need to maintain structural information. The purpose of this paper is to propose conceptual and software support for the design of abstract domains. It contains two main contributions: the notion of open product and a generic pattern domain. The open product is a new way of combining abstract domains, allowing each combined domain to benefit from information from the other components through the notions of queries and open operations. The open product is general-purpose and can be used for other programming paradigms as well. The generic pattern domain Pat(R) automatically upgrades a domain D with structural information, yielding a more accurate domain Pat(D) without additional design or implementation cost. The two contributions are orthogonal and can be combined in various ways to obtain sophisticated domains while imposing minimal requirements on the designer. Both contributions are characterized theoretically and experimentally and were used to design very complex abstract domains such as Pat(OProp⊗OMode⊗OPS), which would be very difficult to design otherwise. On this last domain, designers need only contribute about 20% (about 3,400 lines) of the complete system (about 17,700 lines).
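A loose Python sketch of the open-product idea, with two toy components for groundness and sharing; the query and open-operation interfaces here are invented for illustration and are far simpler than the paper's formal definitions.

    class ModeDomain:
        """Toy groundness component: tracks definitely-ground variables."""
        def __init__(self):
            self.ground = set()
        def is_ground(self, v):          # query operation other domains may call
            return v in self.ground
        def bind(self, v, others):       # transfer function for "v becomes bound"
            self.ground.add(v)

    class SharingDomain:
        """Toy sharing component: tracks possible sharing pairs."""
        def __init__(self):
            self.pairs = set()
        def share(self, a, b):
            self.pairs.add(frozenset((a, b)))
        def bind(self, v, others):
            # Open operation: consult the mode component; a definitely
            # ground variable cannot share with anything.
            if others["mode"].is_ground(v):
                self.pairs = {p for p in self.pairs if v not in p}

    class OpenProduct:
        """Run each component's transfer function, passing all components
        so they can query one another."""
        def __init__(self, **components):
            self.components = components
        def bind(self, v):
            for c in self.components.values():
                c.bind(v, self.components)

    d = OpenProduct(mode=ModeDomain(), sharing=SharingDomain())
    d.components["sharing"].share("X", "Y")
    d.bind("X")                                   # X becomes ground
    print(d.components["sharing"].pairs)          # set(): X/Y sharing refuted

The point of the combination is visible even at this scale: the sharing component refutes a pair it could not have refuted on its own, because it can query the groundness component mid-analysis.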

68 citations


15 Dec 1994
TL;DR: This thesis proposes an approach to extract implementations of abstractions in code maintenance, and applies the approach to develop an object extraction tool for use with FORTRAN code.
Abstract: There is considerable interest in reengineering existing software into object-oriented systems. One of the tasks in such reengineering efforts is to extract the embedded classes and objects. This thesis proposes an approach to extract implementations of abstractions in code maintenance, and applies the approach to develop an object extraction tool for use with FORTRAN code. Reported proposals and implementations for object extraction are summarized. A scheme for automatic class and object extraction is presented. This approach identifies instance variables and methods of classes from a system coded in an imperative language, such as FORTRAN, and reproduces them in a class-based or object-oriented language, such as C++. Prospective instance variables and objects are identified from constructs such as labeled common blocks or formal parameters. Methods are extracted by analyzing how the instance variables were used in the program. Classes are formed by combining the instance variables and the methods. A prototype extractor was implemented. Given a FORTRAN program, the prototype automatically produces a list of potentially useful classes and objects in C++. Experiments were performed to validate the object extractor. From 948 lines of FORTRAN code using some textbook data structures, the program extracted 623 lines of code. Close to half of the lines extracted corresponded to textbook methods. The other half were extraneous methods. Of the correct methods produced, only 5 lines of code were missing. A closely related problem to object extraction is assembling the extracted code to form the original application. A study with the reengineering process at the National Micropopulation Simulation Resource indicated that when a new paradigm is adopted, the code may be substantially changed. These changes make the object extractor useful only in the early stages of a reengineering task. Nevertheless, an object extractor is still a useful tool for reengineering applications, since it extracts objects without much human effort, and it helps in program understanding and code analysis.
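A sketch of the grouping step under the abstract's assumptions: labeled common blocks supply candidate instance variables, and the routines that touch them become candidate methods. The usage table and block names below are hypothetical.

    from collections import defaultdict

    # Hypothetical usage table: routine -> common-block variables it touches.
    uses = {
        "PUSH":  {"STKTOP", "STKARR"},
        "POP":   {"STKTOP", "STKARR"},
        "QINIT": {"QHEAD", "QTAIL"},
        "QADD":  {"QHEAD", "QTAIL"},
    }

    # Labeled common blocks become candidate classes.
    blocks = {"STACK": {"STKTOP", "STKARR"}, "QUEUE": {"QHEAD", "QTAIL"}}

    classes = defaultdict(list)
    for routine, vars_used in uses.items():
        for block, members in blocks.items():
            if vars_used & members:
                classes[block].append(routine)   # routine becomes a method

    for name, methods in classes.items():
        print(f"class {name}: instance vars {sorted(blocks[name])}, "
              f"methods {sorted(methods)}")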

56 citations


Proceedings ArticleDOI
Ash, Alderete, Yao, Oman, Lowther
01 Sep 1994
TL;DR: In this article, the authors describe mechanisms for automated software maintainability assessment and apply those techniques to industrial software systems and demonstrate how a metrics driven maintenance process can be used to prevent code degradation.
Abstract: Useful software usually changes over time. As code changes (for whatever reason), maintenance engineers most often attempt to keep its quality from degrading. But without a means of measuring the quality or maintainability of the code, their efforts are dependent upon their expert knowledge to decide if the code is in "good" shape or if it needs reengineering. This paper describes mechanisms for automated software maintainability assessment and applies those techniques to industrial software systems. The results demonstrate how a metrics-driven maintenance process can be used to prevent code degradation.
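This paper sits in the lineage of Oman's maintainability index work. One commonly cited polynomial form of such an index can be computed mechanically from code metrics; the coefficients below are taken from the broader literature and are not necessarily the exact ones used in this paper.

    import math

    def maintainability_index(avg_halstead_volume, avg_cyclomatic,
                              avg_loc, comment_ratio):
        """Polynomial maintainability index in the form often attributed
        to Oman et al.; coefficients and the treatment of the comment
        term vary across formulations, so treat this as illustrative."""
        return (171
                - 5.2 * math.log(avg_halstead_volume)
                - 0.23 * avg_cyclomatic
                - 16.2 * math.log(avg_loc)
                + 50 * math.sin(math.sqrt(2.4 * comment_ratio)))

    # Higher is more maintainable; tracking the value release over release
    # is the "code health" monitoring the abstract describes.
    print(round(maintainability_index(250.0, 6.0, 40.0, 0.15), 1))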

35 citations


Journal ArticleDOI
TL;DR: This paper presents an algebraic framework (Source Code Algebra or SCA) that forms the basis of the source code query system and presents the SCA’s data model and operators and shows that a variety of source code queries can be easily expressed using them.
Abstract: Querying source code interactively for information is a critical task in reverse engineering of software. However, current source code query systems succeed in handling only small subsets of the wide range of queries possible on code, trading generality and expressive power for ease of implementation and practicality. We attribute this to the absence of clean formalisms for modeling and querying source code. In this paper, we present an algebraic framework (Source Code Algebra or SCA) that forms the basis of our source code query system. The benefits of using SCA include the integration of structural and flow information into a single source code data model, the ability to process high-level source code queries (command-line, graphical, relational, or pattern-based) by expressing them as equivalent SCA expressions, the use of SCA itself as a powerful low-level source code query language, and opportunities for query optimization. We present the SCA’s data model and operators and show that a variety of source code queries can be easily expressed using them. An algebraic model of source code addresses the issues of conceptual integrity, expressive power, and performance of a source code query system within a unified framework.
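To convey the flavor of an algebraic query system (with a far simpler model and operator set than SCA), one can represent source code entities as records and build queries from composable operators; all names below are invented for illustration.

    # Toy "source code algebra": entities are dicts, operators are functions
    # mapping entity sets to entity sets, so queries compose algebraically.
    entities = [
        {"kind": "function", "name": "parse", "file": "parse.c"},
        {"kind": "function", "name": "scan",  "file": "lex.c"},
        {"kind": "variable", "name": "line",  "file": "lex.c"},
    ]

    def select(pred):
        return lambda es: [e for e in es if pred(e)]

    def project(*fields):
        return lambda es: [{f: e[f] for f in fields} for e in es]

    def compose(*ops):
        def run(es):
            for op in ops:
                es = op(es)
            return es
        return run

    # "Names of functions defined in lex.c" as an algebra expression:
    query = compose(select(lambda e: e["kind"] == "function"),
                    select(lambda e: e["file"] == "lex.c"),
                    project("name"))
    print(query(entities))   # [{'name': 'scan'}]

Because every operator has the same type, higher-level query front ends (command-line, graphical, or pattern-based, as the abstract lists) can all be compiled down to the same kind of expression, and algebraic rewriting of that expression is where query optimization gets its footing.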

34 citations


Proceedings ArticleDOI
Kinloch, Munro
01 Sep 1994
TL;DR: The Combined C Graph (CCG) is described, a fine-grained intermediate representation for programs written in the C language from which program slices, call graph, flow-sensitive data flow, definition-use and control dependence views can be easily constructed.
Abstract: The process of program comprehension is often aided by the use of static analysis tools to provide a maintainer with different views of the code. Each view, however, often requires a different intermediate program representation, leading to redundancies and repetition of information. A solution is to develop a single intermediate representation which contains sufficient information to construct each program view. This paper describes the Combined C Graph (CCG), a fine-grained intermediate representation for programs written in the C language from which program slices, call graph, flow-sensitive data flow, definition-use and control dependence views can be easily constructed. The CCG allows the representation of embedded side effects and control flows, and value-returning functions with value parameters. The effects of pointer parameters are also modelled. Construction of the CCG makes use of the PERPLEX C analysis tool which produces a generic Prolog fact base representation of the source code. Existing data flow analysis techniques are extended to allow the computation of flow-sensitive data flow analysis information.
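The single-representation idea can be sketched as one graph whose edges carry kind labels, so that each program view is just a filtered projection; the structure below is hypothetical and far coarser than the actual CCG.

    from collections import defaultdict

    class CombinedGraph:
        """One fine-grained graph; each view filters on an edge kind."""
        def __init__(self):
            self.edges = defaultdict(list)   # node -> [(kind, successor)]
        def add(self, src, kind, dst):
            self.edges[src].append((kind, dst))
        def view(self, kind):
            """Project out a single-view subgraph, e.g. 'control' or 'data'."""
            return {n: [d for k, d in es if k == kind]
                    for n, es in self.edges.items()}

    g = CombinedGraph()
    g.add("s1", "control", "s2")      # s1 ; s2
    g.add("s1", "data", "s3")         # s3 uses a value defined at s1
    g.add("s1", "call", "f")          # s1 calls f
    print(g.view("data"))             # {'s1': ['s3']}

Storing all edge kinds on one node set is what removes the redundancy the abstract complains about: the call graph, data-flow, and control-dependence views share their nodes instead of each duplicating the program structure.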

31 citations


Patent
Jerry Walter Malcolm
06 Sep 1994
TL;DR: An apparatus for producing object code from source code, where the source code includes executable source code and source code documentation; compilation means produce the object code and also produce documentation drawn from selected portions of the executable source code and the source code documentation, organized into a predefined format independent of the executable source code organization.
Abstract: An apparatus for producing object code from source code including input means for receiving the source code, the source code including executable source code and source code documentation, and compilation means, coupled to the input means, including first means for providing object code from the source code, and second means for providing documentation including selected portions of the executable source code and the source code documentation, and for organizing the provided documentation into a predefined format independent of executable source code organization. In addition, a method for producing object code from source code including the steps of receiving the source code, the source code including executable source code and source code documentation, and compiling the received source code including the steps of providing object code from the source code, and providing documentation including selected portions of the executable source code and the source code documentation, and organizing the provided documentation into a predefined format independent of executable source code organization.
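A toy rendering of the second claim (documentation organized independently of code organization): extract tagged documentation comments and regroup them by topic rather than by position in the file. The @topic tag convention is invented for illustration.

    import re

    source = """
    /* @topic(io) Reads one record from the input stream. */
    int read_record(void) { return 0; }

    /* @topic(io) Writes one record to the output stream. */
    int write_record(void) { return 0; }

    /* @topic(init) Resets all counters. */
    void reset(void) { }
    """

    docs = {}
    for topic, text in re.findall(r"/\*\s*@topic\((\w+)\)\s*(.*?)\s*\*/",
                                  source, re.S):
        docs.setdefault(topic, []).append(text)

    for topic in sorted(docs):        # organized by topic, not by file order
        print(topic, "->", docs[topic])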

26 citations


Proceedings ArticleDOI
14 Nov 1994
TL;DR: This paper describes an approach to object-oriented code understanding that focuses largely on informal linguistic aspects of code, such as comments and identifiers.
Abstract: Object-oriented code is considered to be inherently more reusable than functional decomposition code; however, object-oriented code can suffer from a program understanding standpoint since good object-oriented style seems to require a large number of small methods. Hence code for a particular task may be scattered widely. Thus good semantics-based tools are necessary. This paper describes an approach to object-oriented code understanding that focuses largely on informal linguistic aspects of code, such as comments and identifiers.
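A small sketch of using informal linguistic information: split identifiers into words and index methods by the words appearing in their names and comments, so that code scattered across many small methods can be found by concept. The method names and comments below are hypothetical.

    import re
    from collections import defaultdict

    def words_of(identifier):
        """Split camelCase and snake_case identifiers into lowercase words."""
        spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", identifier)
        return [w.lower() for w in re.split(r"[\s_]+", spaced) if w]

    def build_concept_index(methods):
        """methods: {method_name: comment} -> {word: [method_name, ...]}"""
        index = defaultdict(list)
        for name, comment in methods.items():
            for w in set(words_of(name) + re.findall(r"[a-z]+", comment.lower())):
                index[w].append(name)
        return index

    idx = build_concept_index({
        "drawBorder":   "paint the window border",
        "resizeWindow": "change window size",
    })
    print(sorted(idx["window"]))   # ['drawBorder', 'resizeWindow']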

22 citations


Journal ArticleDOI
TL;DR: The RE-Analyzer is an automated, reverse engineering system providing a high level of integration with a computer-aided software engineering (CASE) tool, where legacy code is transformed into abstractions within a structured analysis methodology.
Abstract: The RE-Analyzer is an automated, reverse engineering system providing a high level of integration with a computer-aided software engineering (CASE) tool. Specifically, legacy code is transformed into abstractions within a structured analysis methodology. The abstractions are based on data flow diagrams, state transition diagrams, and entity-relationship data models. Since the resulting abstractions can be browsed and modified within a CASE tool environment, a broad range of software engineering activities are supported, including program understanding, reengineering, and redocumentation. In addition, diagram complexity is reduced through the application of control partitioning: an algorithmic technique for managing complexity by partitioning source code modules into smaller yet semantically coherent units. This approach also preserves the information content of the original source code. It is in contrast to other reverse engineering techniques that produce only structure charts and thus suffer from loss of information, unmanaged complexity, and a lack of correspondence to structured analysis abstractions. The RE-Analyzer has been implemented and currently supports the reverse engineering of software written in the C language. It has been integrated with a CASE tool based on the VIEWS method.

Proceedings ArticleDOI
01 Sep 1994
TL;DR: Methods that use markup languages such as SGML to embed information about the syntax and semantics of a program in the program code are described, and it is shown how these can be used to enhance its presentation style.
Abstract: Reading and understanding programs is a key activity in software reengineering, development, and maintenance. The ability of people to understand programs is directly related to the ease with which the source code and documentation can be read. Thus, enhancements to the style of presentation should heighten this comprehensibility. We describe methods that use markup languages such as SGML to embed information about the syntax and semantics of a program in the program code, and then show how these can be used to enhance its presentation style. We also briefly discuss the extension of these markup language concepts to text databases, and indicate how they can support various structural views of the code through browsing techniques associated with database queries.
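A minimal sketch of the embedding idea: wrap lexical categories of a code fragment in SGML-style tags that presentation or query tools could then restyle or search. The tag names are hypothetical, and real SGML markup would be driven by a DTD and a proper parser rather than a regex.

    import re

    KEYWORDS = {"if", "else", "while", "return", "int"}

    def markup(source):
        """Wrap keywords and identifiers of a C-like fragment in tags."""
        def tag(m):
            word = m.group(0)
            element = "kw" if word in KEYWORDS else "id"
            return f"<{element}>{word}</{element}>"
        return re.sub(r"[A-Za-z_]\w*", tag, source)

    print(markup("if (count > max) return count;"))
    # <kw>if</kw> (<id>count</id> > <id>max</id>) <kw>return</kw> <id>count</id>;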

Journal ArticleDOI
TL;DR: A new cognitive approach to system (re)engineering based on code comprehension tools that provide a visual representation of code containing less cognitive noise that better enables programmers to understand system design.
Abstract: Describes a code conversion tool that helps programmers visualize and understand system design. The author first reviews current software reengineering tools and then describe a new cognitive approach to system (re)engineering based on code comprehension tools that provide a visual representation of code containing less cognitive noise. This better enables programmers to understand system design. The approach integrates code comprehension tools with current reengineering methodologies to create an integrated reengineering workbench for converting legacy code into newer languages such as Ada or C/C++. >

Proceedings Article
31 Oct 1994
TL;DR: Important "bullets of measure" that should be taken into consideration during and after the development of an Object-Oriented System are discussed, particularly as it pertains to the static analysis of OO source code.
Abstract: Object-Oriented Analysis and Design (OOAD) techniques appear to be at the forefront of software engineering technologies. Nevertheless, as with the introduction of any relatively new technique, there is a tendency for people to attempt to maximize efficiency without always having a corresponding factual basis for their actions. This paper discusses important "bullets of measure" that should be taken into consideration during and after the development of an Object-Oriented System, particularly as it pertains to the static analysis of OO source code. The proposed metrics are consistent with the suggestions of many individuals who are well known for their experience.

Dissertation
01 Jan 1994
TL;DR: The method has indicated that acquiring a data design from existing data intensive program code by program transformation with human assistance is an effective method in software maintenance.
Abstract: The problem area addressed in this thesis is the extraction of a data design from existing data intensive program code. The purpose of this is to help a software maintainer to understand a software system more easily, because a view of a software system at a high abstraction level can be obtained. Acquiring a data design from existing data intensive program code is an important part of reverse engineering in software maintenance. A large proportion of software systems currently needing maintenance is data intensive. The research results in this thesis can be directly used in a reverse engineering tool. A method has been developed for acquiring data designs from existing data intensive programs, COBOL programs in particular. Program transformation is used as the main tool. Abstraction techniques and the method of crossing levels of abstraction are also studied for acquiring data designs. A prototype system has been implemented based on the method developed. This involved implementing a number of program transformations for data abstraction, and thus contributing to the production of a tool. Several case studies, including one case study using a real program with 7,000 lines of source code, are presented. The experimental results show that the Entity-Relationship Attribute Diagrams derived from the prototype can represent the data designs of the original data intensive programs. The original contribution of the thesis is that the approach presented here can identify and extract data relationships from the existing code by combining analysis of data with analysis of code. The approach is believed to provide better capabilities than other work in the field. The method has indicated that acquiring a data design from existing data intensive program code by program transformation with human assistance is an effective method in software maintenance. Future work, including extending the method to build an industrial-strength tool, is suggested at the end of the thesis.
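A toy illustration of the extraction target (not the thesis's transformation-based method): read entity and attribute candidates out of COBOL-like record layouts, with a field name shared between records hinting at a relationship. The layout below is invented.

    import re

    layout = """
    01 CUSTOMER-REC.
       05 CUST-ID    PIC 9(6).
       05 CUST-NAME  PIC X(30).
    01 ORDER-REC.
       05 ORDER-ID   PIC 9(8).
       05 CUST-ID    PIC 9(6).
    """

    entities, current = {}, None
    for line in layout.splitlines():
        if m := re.match(r"\s*01\s+([\w-]+)", line):     # record = entity
            current = m.group(1)
            entities[current] = []
        elif m := re.match(r"\s*05\s+([\w-]+)", line):   # field = attribute
            entities[current].append(m.group(1))

    shared = set(entities["CUSTOMER-REC"]) & set(entities["ORDER-REC"])
    print(entities)
    print("candidate relationship key:", shared)         # {'CUST-ID'}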

03 Jun 1994
TL;DR: Work under way is described to build a system which can generate the data flow information to support code reuse and other software development and maintenance activities and an interesting observation is that forward ripples require less average computation than backward ripples.
Abstract: Ripple analysis, a form of program slicing, can be used to identify those parts of an existing system which provide specific functionality, thus supporting code reuse and other software development/maintenance activities. If a functional requirement is completely met by an identified set of input and output statements, forward ripple analysis for all input statements combined with backward ripple analysis for all output statements can identify all parts of the code related to that functionality. This paper describes a prototype static code analysis system for Pascal code that can be used to identify bi-directional ripples (based on data flow analysis). This system utilizes data flow analysis techniques to collect flow-insensitive information about the variables in each source statement and build a database containing call, control flow, and dead graphs. This database can then be used to identify forward and backward ripples of all appropriate input and output statements and thus the subset of the system which provides the specified functionality. In this paper we describe work under way to build a system which can generate the data flow information to support code reuse and other software development and maintenance activities. An interesting observation is that forward ripples require less average computation than backward ripples. A ripple algorithm that performs a graph traversal to identify reverse and forward side-effects for a given variable on a given source line is described.
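A sketch of the bi-directional traversal over data-flow edges (the graph below is hypothetical): forward ripples follow definition-to-use edges from a statement, backward ripples follow the same edges in reverse.

    from collections import defaultdict

    def ripple(edges, start, forward=True):
        """Transitive closure over data-flow edges from `start`.
        edges: iterable of (def_stmt, use_stmt) pairs."""
        succ = defaultdict(set)
        for d, u in edges:
            if forward:
                succ[d].add(u)
            else:
                succ[u].add(d)
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            for m in succ[n] - seen:
                seen.add(m)
                stack.append(m)
        return seen

    flows = [(1, 3), (2, 3), (3, 5), (4, 5)]
    print(ripple(flows, 1, forward=True))    # {3, 5}: statements 1 can affect
    print(ripple(flows, 5, forward=False))   # {1, 2, 3, 4}: statements affecting 5

On this toy graph the backward closure is larger than the forward one, loosely echoing the paper's observation that backward ripples require more computation on average.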

01 Jan 1994
TL;DR: It has been observed that this code is robust: it has solved a variety of problems from different starting points; however, the code is inefficient in that it takes considerable CPU time as compared with certain other available codes.

Abstract: The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960's, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed focusing on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time as compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.

Journal ArticleDOI
TL;DR: The techniques have been applied in practice to a wide range of source programs and analysis problems, including assessment problems, and meeting some of the integrity requirements for verification tools given in ‘The procurement of safety critical software in defence equipment’ (MoD, 1991).
Abstract: This paper describes an approach to the semantic analysis of procedural code. The techniques differ from those adopted in current static analysis tools such as MALPAS (Bramson, 1984) and SPADE (Clutterbuck and Carre, 1988) in two key respects: (1) A database is used, together with language-specific and language-independent data models, as a repository for all information about a program or set of programs which is required for analysis, and for storing and interrelating the results of analyses; (2) The techniques aim to treat the full language under consideration by a process of successive transformation and abstraction from the source code until a representation is obtained which is amenable to analysis. This abstraction process can include the production of formal specifications from code. The techniques have been partially implemented for the IBM OS/VS dialect of COBOL '74 and for FORTRAN '77. Several components of the resulting toolset have been formally specified in Z, thus meeting some of the integrity requirements for verification tools given in 'The procurement of safety critical software in defence equipment' (MoD, 1991). The techniques have been applied in practice to a wide range of source programs and analysis problems (Lano and Haughton, 1993b; Lano, et al., 1991), including assessment problems (Lloyd's Register, 1992, 1993; Hornsby and Eldridge, 1990). Section 1 gives an overview of the analysis process. Section 2 describes the representations used to support the process. Section 3 describes some of the techniques involved, and Section 4 gives examples of applications of the process. The Appendix contains extracts from a large case study carried out using tools developed to support the process.

Journal ArticleDOI
TL;DR: This paper introduces this class of back-end CASE tools, lists their capabilities, and describes how they can be used during the software-development process to increase overall software quality.
Abstract: The upcoming standardization of the Ada Semantic Interface Specification (ASIS) makes possible the development of portable static analysis tools for Ada programs. This paper introduces this class of back-end CASE tools, lists their capabilities, and describes how they can be used during the software-development process to increase overall software quality. The description is based on one such tool: the Ada Analyzer developed by Little Tree Consulting.

03 Jun 1994
TL;DR: An effort underway at Old Dominion University to determine if some code analysis techniques long used in the compiler development community can be useful during code maintenance, including ripple analysis, which identifies possible side effects which result from modification of a program statement.
Abstract: With few exceptions, software that is used is changed, often many times. These changes are frequently made by individuals who do not have time to fully understand the code they are changing. This results in code that --over time-- contains many segments that no longer contribute to required functionality. We describe an effort underway at Old Dominion University to determine if some code analysis techniques long used in the compiler development community can be useful during code maintenance. An example is the identification of useless code: code which may execute but which cannot affect program output. Several components of a "proof-of-concept" system have been completed. These include a Pascal parse tree generator, a Pascal parse tree to source code transformer, two control flow graph generators, and a "ripple" analysis tool. Informally, ripple analysis identifies possible side effects which result from modification of a program statement. This is one step in identifying useless code. In addition to these Pascal-based tools, a COBOL parse tree generator exists for a significant subset of COBOL. Ripple analysis is typical of a class of static code analysis techniques which could be useful during software development, validation, and maintenance. But two key issues must be better understood about these kinds of analyses: their speed and utility when dealing with "real" systems. Some existing algorithms have high worst-case run-time complexity, and we are most interested in programs where n is O(1,000,000). It remains to be demonstrated that these algorithms are feasible to use with large systems because of their run-time behavior. Secondly, we must demonstrate that the information provided is worth the effort to produce it. This is what we are now about.

01 Sep 1994
TL;DR: This work shows how to optimize parallel programs by changing blocking operations into non-blocking ones, performing code motion to increase the time for communication overlap, and caching remote values to eliminate some read accesses entirely.
Abstract: We present compiler optimization techniques for explicitly parallel programs that communicate through a shared address space. The source programs are written in a single program multiple data (SPMD) style, and our machine target is a multiprocessor with physically distributed memory and hardware or software support for a single address space. The source language involves normal read and write operations on the address space, which correspond either to local memory operations or to communications over an interconnect network. The remote operations result in high latencies, but much of the latency can be overlapped with local computation or initiation of further remote operations. Non-blocking memory operations allow this overlap to be expressed directly. However, overlap is difficult for programmers to do by hand; it can lead to subtle program errors, since the order in which operations complete is no longer obvious. Programmers writing explicitly parallel code expect reads and writes from a single thread to take effect in program order, a property called sequential consistency. The use of non-blocking memory operations might yield executions that violate sequential consistency. We provide a new algorithm for static program analysis to detect memory operations that can safely be made non-blocking. The analysis requires dependency information across and within threads, and builds on earlier work by Shasha and Snir. We improve their results by providing a more efficient algorithm for SPMD programs, and by improving the accuracy of the analysis through the use of synchronization information. Using the results of this analysis, we show how to optimize parallel programs by changing blocking operations into non-blocking ones, performing code motion to increase the time for communication overlap, and caching remote values to eliminate some read accesses entirely. We show the potential payoff from each of our optimizations on real applications, using hand-transformed programs. The experiments are done on a CM-5 multiprocessor using the Split-C runtime system, which provides a software implementation of a global address space and both blocking and non-blocking memory operations.
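A greatly simplified sketch of the Shasha-Snir style condition the paper builds on: an access can safely be made non-blocking only if it lies on no cycle of program-order and conflict edges. The four-access example encodes the classic two-thread interleaving that must stay ordered; the representation is invented for illustration and omits the paper's SPMD and synchronization refinements.

    from collections import defaultdict

    def cycle_nodes(nodes, edges):
        """Return the set of nodes lying on some cycle (naive DFS check)."""
        succ = defaultdict(set)
        for a, b in edges:
            succ[a].add(b)
        def reaches(src, dst, seen=None):
            seen = seen or set()
            if src in seen:
                return False
            seen.add(src)
            return dst in succ[src] or any(reaches(n, dst, seen) for n in succ[src])
        return {n for n in nodes if reaches(n, n)}

    # Program-order edges within each thread, plus conflict edges (both
    # directions) between accesses to the same location on different threads.
    accesses = ["w1_x", "r1_y", "w2_y", "r2_x"]
    program_order = [("w1_x", "r1_y"), ("w2_y", "r2_x")]
    conflicts = [("w1_x", "r2_x"), ("r2_x", "w1_x"),
                 ("r1_y", "w2_y"), ("w2_y", "r1_y")]

    blocking = cycle_nodes(accesses, program_order + conflicts)
    print(sorted(blocking))   # all four: this classic cycle must stay ordered

Any access the check leaves off every cycle can be issued non-blocking, which is exactly the license the compiler needs for the overlap and code-motion optimizations the abstract lists.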

Journal ArticleDOI
TL;DR: This work uses a simple method that is based on sound software engineering practice to synthesise programs by duplicating human methods for constructing programs, such as top-down design.
Abstract: The past study of program synthesis has mainly concentrated on attempting to synthesise programs by duplicating human methods for constructing programs, such as top-down design. Here we do not attempt this process but instead use a simple method that is based on sound software engineering practice. Knuth-Bendix completion is used in the synthesis process but without the need for the exhaustive completion of program axioms against each other. A software engineering framework is used to reduce the pairs of completed program axioms to the optimum for synthesising the required program. Examples of program synthesis are given and contrasted with an ad hoc method of synthesis.

03 Jun 1994
TL;DR: This paper describes a static code analysis program that allows software maintainers to monitor potential bi-directional ripple side-effects caused by modifications to Pascal source and presents data from analyses of actual code.
Abstract: Ripple analysis can be used to aid in the understanding of unfamiliar code and in debugging. This paper describes a static code analysis program that allows software maintainers to monitor potential bi-directional ripple side-effects caused by modifications to Pascal source and presents data from analyses of actual code. This program utilizes data flow analysis techniques to collect flow-insensitive information about the variables in each source statement and build a database containing call, control flow, and dead graphs. An interesting observation is that forward ripples require less computation on the average than backward ripples. A ripple algorithm that performs a graph traversal to identify reverse and forward side-effects for a given variable on a given source line is described.

Journal ArticleDOI
TL;DR: This paper identifies several anomalies that can exist in source code, provides a means of identifying the anomalies, and suggests methods to eliminate or minimize the impact of the anomalies.
Abstract: One aspect of code quality is a design (or a set of source code) that contains only things that are relevant to the solution of the problem at hand. There are no redundant or superfluous entities. In a "good" (high quality) design everything has a purpose, and its purpose is clear. Code quality can be improved by eliminating redundant and unnecessary declarations. This paper provides a technique that can help improve code quality. It identifies several anomalies that can exist in source code, provides a means of identifying the anomalies, and suggests methods to eliminate or minimize the impact of the anomalies.
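One of the declaration anomalies the paper targets, a variable declared but never used, can be sketched crudely with regular expressions over a toy C subset; a real tool would use a symbol table and scope analysis rather than pattern matching.

    import re

    def unused_declarations(source):
        """Flag variables declared but never referenced again (toy C subset)."""
        declared = set(re.findall(r"\bint\s+([A-Za-z_]\w*)", source))
        mentions = re.findall(r"\b([A-Za-z_]\w*)\b", source)
        # A variable mentioned only once appears only at its declaration.
        return sorted(v for v in declared if mentions.count(v) == 1)

    code = """
    int total;
    int scratch;
    total = 0;
    total = total + 1;
    """
    print(unused_declarations(code))   # ['scratch']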

Journal ArticleDOI
TL;DR: Two tools are described which support the assessment of software safety analysis at the beginning and at the end of the life cycle: a front-end tool that turns formally specified systems into the dynamic, operational form of a Petri net, and a back-end tool that works on the machine code representation of software.

Book
07 Dec 1994
TL;DR: A proceedings volume whose contributions range from run-time check elimination for Ada 9X to recommendations and proposals for an Ada strategy in the Space Software Development Environment.
Abstract: Contents include:
- Opening address: Ada 9X
- Run-time check elimination for Ada 9X
- Adequacy of the new generation of multithreading operating systems to the Ada Tasking Model
- Merging Ada 9X and C++ in a graphics system software architecture
- The AECSS fault tolerant distributed Ada testbed and application
- A front-end to HOOD
- Tool support for high integrity Ada software
- Testing Ada abstract data types using formal specifications
- Formal methods for a space software development environment
- Object orientation is not always best!
- Beyond abstract data types: Giving life to objects
- Test methods and tools for SOHO Mass Memory Unit software
- Integrating modular, Object Oriented Programming, and application generator technologies in large real time and distributed developments
- A new approach for HOOD/Ada mapping
- Shlaer/Mellor or Rumbaugh? A discussion of two popular Object-Oriented Methods
- How should military Ada software be documented?
- Evolving an Ada curriculum to 9X
- Recommendations and proposals for an Ada strategy in the Space Software Development Environment
- Life ADA: An APSE integrating multiple compilers
- Extended application of Ada to cover ECBS with O4S
- Development of a lightweight object-based software process model under pragmatic constraints
- ESSPASE: European Space Software Product Assurance Support Environment
- Test philosophy and validation strategy of on-board real time software in Envisat-1 satellite radar-altimeter
- A knowledge-based system for diagnosis in Veterinary Medicine
- Event diagnosis and recovery in real-time on-board autonomous mission control
- Safety aspects of the Ariane 5 on-board software
- Ada controls the European robotic arm
- Automatic generation of Ada source code for the Rafale Mission computer
- The Real-time Rapporteur Group (ISO/JTC1/SC22/WG9/RRG) JTC 1.22.35, or How to avoid and control proliferation of new Ada Real time extensions
- A fully reusable class of objects for synchronization and communication in Ada 9X
- Interfacing computer communications from Ada in a diverse and evolving environment
- Cost-benefit analysis for software reuse: A decision procedure
- Ex2: Integrating Ada and extra support in a doubly portable extended executive designed for hard real time systems
- Distribution of tasks within a centrally scheduled local area network
- Handling interrupts in Ada 9X
- Tuning Ada programs in advance
- CEDEX: A tool for the selection of a development and execution environment for real time on-board applications
- Portability effort estimates for real time applications written in Ada through static code analysis
- FAA certification of Ada Run-Time Systems
- Panel on safety and reliability held on September 28, 1994
- Experiences integrating object-oriented analysis with Joint Application Development (JAD)

10 Nov 1994
TL;DR: This paper describes the use of static analysis tools for the re-engineering of software for a relay system for overhead power cables to demonstrate that the software was free of significant faults and to provide a software specification and design documentation that would allow the software to be maintained in the future.
Abstract: This paper describes the use of static analysis tools for the re-engineering of software for a relay system for overhead power cables. The system was originally written some fifteen years ago, in assembler, and has undergone considerable modification at various times in the past. The static analysis was undertaken for two reasons: to demonstrate that the software was free of significant faults, and to provide a software specification and design documentation that would allow the software to be maintained in the future.

15 Dec 1994
TL;DR: This work presents the design of a distributed computing environment that addresses the dual goals of ease of use and efficiency, and contributes an examination of optimizations as they apply to distributed object systems in general, and a set of specific optimizations for Diamonds.
Abstract: The design and implementation of efficient, easy to program, distributed systems remains one of the foremost challenges for software engineering today. Too often, users and system designers are forced to trade ease of programming for efficiency or vice versa. In this work we present the design of a distributed computing environment that addresses the dual goals of ease of use and efficiency. To make the system easier to program, we support a fine grain active object model. To make it more efficient, we map the active object model onto an architecture we have specifically designed to be amenable to a range of optimizations. Efficiency is also enhanced by the architectural framework of this system, which we have termed Diamonds; this framework allows the concurrency within an active object model to be exploited in an effective manner. Diamonds includes language processing facilities, static analysis tools, and a distributed runtime environment. We have applied a strong object oriented design philosophy to all aspects of our system. The language processing facilities use familiar tools in new ways to capture the essence of an object oriented language in an extensible, maintainable manner. The runtime environment is structured around a novel set of execution protocols that allow for varying numbers of heterogeneous processors to be used. To communicate between the constituents of our environment we have designed a unique set of abstractions that hide the location and message transmission details. Object oriented distributed systems like Diamonds are new territory in terms of optimization. We contribute an examination of optimizations as they apply to distributed object systems in general, and suggest a set of specific optimizations for Diamonds.

Proceedings ArticleDOI
21 Dec 1994
TL;DR: The paper introduces a tool that is designed to work on abstract representations, and directly manipulate them, capable of performing program transformations based on formal language theory and the abstract program representations.
Abstract: Large computer programs have to be maintained and hence understood by many different people most of whom are not their original authors. Such programs need to be evaluated and transformed into semantically equivalent but maintainable code. The paper introduces a tool that is designed to work on abstract representations, and directly manipulate them. The proposed tool is capable of performing program transformations based on formal language theory and the abstract program representations (introduced as abstract syntax by McCarthy (1962)). The definition of an abstract program representation is extended to mean a simple view of the program with respect to some program attributes that help us to concentrate on, clarify and simplify our manipulations.

Book ChapterDOI
26 Sep 1994
TL;DR: From applying the tool VAL, quantitative results for actual code can be obtained and compared with the amount of work necessary to port it in order to obtain a taxonomy of each of the coding rules.
Abstract: Until now, estimates of the influence of coding rules on portability have been determined by judicious guessing. By applying the tool VAL, quantitative results for actual code can be obtained and compared with the amount of work necessary to port it, in order to obtain a taxonomy of each of the rules. This in turn offers the possibility of quantitatively estimating the portability effort through static code analysis.
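The quantitative idea can be sketched as counting violations of coding rules in source text and weighting each rule with an empirically derived porting cost; the rules and weights below are hypothetical stand-ins for VAL's taxonomy.

    import re

    # Hypothetical coding rules with per-violation porting effort (hours).
    RULES = {
        "machine-dependent type": (re.compile(r"\blong\s+long\b"), 0.5),
        "inline assembly":        (re.compile(r"\basm\b"),         2.0),
        "hardcoded path":         (re.compile(r'"/[^"]*"'),        0.25),
    }

    def portability_estimate(source):
        """Count rule violations and sum their weighted porting costs."""
        report, total = {}, 0.0
        for rule, (pattern, cost) in RULES.items():
            n = len(pattern.findall(source))
            report[rule] = n
            total += n * cost
        return report, total

    code = 'asm("nop"); FILE *f = fopen("/etc/passwd", "r");'
    print(portability_estimate(code))
    # ({'machine-dependent type': 0, 'inline assembly': 1, 'hardcoded path': 1}, 2.25)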