Showing papers in "Information & Software Technology in 1994"


Journal ArticleDOI
TL;DR: Results of this analysis support the hypothesis of significant non-linearities, and the existence of both economies and diseconomies of scale in software development.
Abstract: Researchers and practitioners have found it useful for cost estimation and productivity evaluation purposes to think of software development as an economic production process, whereby inputs, most notably the effort of systems development professionals, are converted into outputs (systems deliverables), often measured as the size of the delivered system. One central issue in developing such models is how to describe the production relationship between the inputs and outputs. In particular, there has been much discussion about the existence of either increasing or decreasing returns to scale. The presence or absence of scale economies at a given size are important to commercial practice in that they influence productivity. A project manager can use this knowledge to scale future projects so as to maximize the productivity of software development effort. The question of whether the software development production process should be modelled with a non-linear model is the subject of some recent controversy. This paper examines the issue of non-linearities through the analysis of 11 datasets using, in addition to standard parametric tests, new statistical tests with the non-parametric Data Envelopment Analysis (DEA) methodology. Results of this analysis support the hypothesis of significant non-linearities, and the existence of both economies and diseconomies of scale in software development.
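
As a rough illustration of the kind of parametric test at issue, consider fitting a power-law production model, effort = a·size^b, by least squares in log space; an exponent b significantly above 1 indicates diseconomies of scale, below 1 economies of scale. The sketch below uses invented data, not the paper's 11 datasets.

```python
# Minimal sketch of a parametric scale-economy test: fit effort = a * size^b
# by ordinary least squares in log space. b > 1 suggests diseconomies of
# scale; b < 1 suggests economies of scale. Data values are illustrative.
import numpy as np

size = np.array([10, 25, 40, 80, 150, 300])       # e.g. KLOC delivered
effort = np.array([30, 90, 160, 400, 900, 2300])  # e.g. person-months

b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
print(f"effort ~ {np.exp(log_a):.2f} * size^{b:.2f}")
# A test of b != 1 is then evidence for or against non-linearity.
```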

140 citations


Journal ArticleDOI
TL;DR: Snoop is summarized—an expressive event specification language which supports temporal, explicit, and composite events in addition to the traditional database events, and introduces an event interface to extend the conventional object semantics to include the specification of primitive events.
Abstract: This paper describes the design of event-condition-action (ECA) rules for supporting active capability in an object-oriented DBMS and sketches its incorporation in a C++ environment. We summarize Snoop—an expressive event specification language which supports temporal, explicit, and composite events in addition to the traditional database events. A small set of event operators is introduced for building composite events. We also introduce an event interface to extend the conventional object semantics to include the specification of primitive events. This interface is used as a basis for the specification of events spanning objects, possibly from different classes, and detection of primitive and composite events. New rules can be introduced on existing objects, enabling objects to react to their own changes as well as to the changes of other objects. We introduce a runtime subscription mechanism between rules and objects to monitor objects selectively. This mechanism elegantly supports class level as well as instance level rules. Finally, using an illustrative example, we compare the functionality of our approach with related work.
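
As a hedged illustration of the flavour of composite-event detection (this is our own minimal sketch, not Snoop's actual syntax or API), a sequence operator can be modelled as an object that fires its rule's action when one primitive event is followed by another:

```python
# Minimal sketch of a sequence (SEQ) composite event in the spirit of
# Snoop's operators; names and structure are ours, not Snoop's API.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    timestamp: float

class Seq:
    """Composite event: raised when `second` occurs after `first`."""
    def __init__(self, first, second, action):
        self.first, self.second, self.action = first, second, action
        self._pending = None        # occurrence of `first` awaiting `second`

    def notify(self, e: Event):
        if e.name == self.first:
            self._pending = e
        elif e.name == self.second and self._pending is not None:
            self.action(self._pending, e)   # the ECA rule's action part
            self._pending = None

rule = Seq("deposit", "withdraw", lambda a, b: print("rule fired:", a, "->", b))
for ev in (Event("deposit", 1.0), Event("withdraw", 2.0)):
    rule.notify(ev)
```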

82 citations


Journal ArticleDOI
TL;DR: The issues associated with attaching application-dependent semantics to objects which occur visually (or otherwise) within video sequences are explored, and an extension of the anchor concept common in hypermedia models towards a more general mechanism, applicable to a wide range of media, is described.
Abstract: The increasing use of multimedia systems has led to a need for retrieval mechanisms not just for static media (such as text and graphics) but also for the ‘new’ dynamic media (time-variant media such as audio and video). All these media are indexable by physical attributes, but as yet generally only text and graphics are indexable to a further level of granularity by content . This paper explores the issues associated with attaching application-dependent semantics to objects which occur visually (or otherwise) within video sequences. The principles discussed here make particular reference to video, but they should be general enough to be applicable to other dynamic media such as audio, or even smell and tactility. Thus, they describe an extension of the anchor concept common in hypermedia models towards a more general mechanism, applicable to a wide range of media. The paper first describes a general concept for accessing the contents of dynamic media by considering two points of view—those of hypermedia and information retrieval. The paper then introduces the general concept of Sensitive Regions (or ‘hot-spots’) by reverse engineering techniques from established technologies such as computer graphics and traditional cinematic animation. In the final sections, the paper describes three applications being developed at the Computer Graphics Center (ZGDV) which explore a variety of aspects associated with Sensitive Regions: the HyperPicture-System focuses on the management of such data, MOVie experiments with the creation and editing of Sensitive Regions in a cinematically oriented context, while ShareME explores some of the issues associated with the use of Sensitive Regions in the interface to multimedia applications.
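
A minimal sketch of the Sensitive Region idea (our own illustration, with hypothetical field names): an anchor in a video is 'hot' only within a time interval and a spatial rectangle, and a hit yields the application-dependent link target.

```python
# Illustrative Sensitive Region ("hot-spot"): a hypermedia anchor bound to
# a time interval of a video and a rectangle within the frame.
from dataclasses import dataclass

@dataclass
class SensitiveRegion:
    target: str        # link destination carrying the application semantics
    t_start: float     # seconds into the video
    t_end: float
    rect: tuple        # (x, y, width, height) in frame coordinates

    def hit(self, t, x, y):
        rx, ry, rw, rh = self.rect
        return (self.t_start <= t <= self.t_end
                and rx <= x <= rx + rw and ry <= y <= ry + rh)

regions = [SensitiveRegion("actor_bio.html", 12.0, 18.5, (40, 60, 120, 200))]
t, x, y = 14.2, 100, 150                      # a click during playback
for r in regions:
    if r.hit(t, x, y):
        print("follow link to", r.target)
```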

68 citations


Journal ArticleDOI
TL;DR: The use of viewpoint specifications, a technique which concentrates on making large specifications more understandable, is discussed: a system is described using several self-contained partial specifications which may then be amalgamated to give a description of the complete system.
Abstract: In the paper we discuss the use of viewpoint specifications, a technique which concentrates on making large specifications more understandable. Rather than specifying the whole system at once, a system is described using several self-contained partial specifications, which may then be amalgamated to give a description of the complete system. Amalgamation is taken to be a composite process in which the data and operations of the constituent viewpoints are separately considered. The approach is illustrated in terms of Z specifications.
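
A toy sketch of the amalgamation step (our own illustration; the paper works with Z schemas, not code): the data of the viewpoints are unioned, while an operation appearing in several viewpoints is conjoined from its partial definitions.

```python
# Toy amalgamation of two partial specifications: state components are
# merged, and operations defined in both viewpoints are conjoined.
view1 = {"state": {"members": "set of PERSON"},
         "ops": {"Join": "adds the person to members"}}
view2 = {"state": {"loans": "PERSON -> set of BOOK"},
         "ops": {"Join": "gives the person an empty loan record",
                 "Borrow": "adds a book to the person's loans"}}

state = {**view1["state"], **view2["state"]}        # union of the data
ops = {}
for name in view1["ops"].keys() | view2["ops"].keys():
    parts = [v["ops"][name] for v in (view1, view2) if name in v["ops"]]
    ops[name] = " AND ".join(parts)                 # conjoin shared operations

print(state)
print(ops)      # 'Join' combines both viewpoints' partial definitions
```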

47 citations


Journal ArticleDOI
TL;DR: An algorithm to create a reachability tree for statecharts is presented, along with how to use this tree to analyse dynamic properties of statecharts: reachability from any state configuration, usage of transitions, reinitiability, deadlocks, and valid sequences of events.
Abstract: Statecharts are an extension to finite state machines with capability for expressing hierarchical decomposition and parallelism. They also have a mechanism called history, to remember the last visit to a superstate. An algorithm to create a reachability tree for statecharts is presented. Also shown is how to use this tree to analyse dynamic properties of statecharts: reachability from any state configuration, usage of transitions, reinitiability, deadlocks, and valid sequences of events. Owing to its powerful notation, building a reachability tree for statecharts presents some difficulties, and we show how these problems were solved in the tree we propose.
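
A simplified sketch of the core construction (ours; it flattens the problem to a plain state machine, whereas the paper's contribution is handling hierarchy, parallelism and history):

```python
# Breadth-first construction of a reachability tree for a flat state
# machine. Real statecharts require configurations (sets of states) and
# history handling, which is where the paper's difficulties arise.
from collections import deque

transitions = {                      # state -> {event: next state}
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"resume": "running", "stop": "idle"},
}

def reachability_tree(initial):
    tree, seen, queue = {}, {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        tree[s] = transitions.get(s, {})
        for event, nxt in tree[s].items():
            if nxt not in seen:      # expand each configuration once
                seen.add(nxt)
                queue.append(nxt)
    return tree

tree = reachability_tree("idle")
print(sorted(tree))
# Unused transitions, deadlocks (no outgoing transitions) and valid event
# sequences can all be read off the finished tree.
```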

36 citations


Journal ArticleDOI
TL;DR: The proposal provides a nested object-relationship-operation framework which is based on a consistent use of the object model, a strong notion of encapsulation and a clear distinction between the essential and the contingent properties of the modelled entities.
Abstract: This paper evaluates current object-oriented analysis methods and proposes some possible improvements. We start by describing software requirements definition activities and reviewing the concepts of data modeling and object-orientation. We then identify key principles of object-oriented analysis and judge a selection of emerging methods against these principles. Finally, a proposal is presented which attempts to avoid some of the identified shortcomings. The proposal provides a nested object-relationship-operation framework which is based on a consistent use of the object model, a strong notion of encapsulation and a clear distinction between the essential and the contingent properties of the modelled entities.

35 citations


Journal ArticleDOI
TL;DR: The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments.
Abstract: Current approaches to software engineering practice for parallel systems are reviewed. The parallel software designer has not only to address the issues involved in the characterization of the application domain and the underlying hardware platform, but, in many instances, the production of portable, scalable software is desirable. In order to accommodate these requirements, a number of specific techniques and tools have been proposed, and these are discussed in this review in the framework of the parallel software life-cycle. The paper outlines the role of formal methods in the practical production of parallel software, but its main focus is the emergence of development methodologies and environments. These include CASE tools and run-time support systems, as well as the use of methods taken from experience of conventional software development. Because of the particular emphasis on performance of parallel systems, work on performance evaluation and monitoring systems is considered.

31 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that the endeavour to extend the principles of object-orientation into an all-encompassing method is ill-considered, and that an expansive view of methods integration is more likely to prove conducive in harnessing the strengths ofobject- Orientation to alternative approaches to systems development.
Abstract: The principles of object-orientation have been widely used in the area of programming for some time. Their application to the design and specification of software has proved beneficial in a number of ways. In recent years there have been numerous attempts to exploit the same principles in high level design and the analysis of systems requirements. In many cases this has been achieved by extending the principles of object-oriented programming to the earlier phases of analysis and design without recognition that activities and objectives in the different domains may be dissimilar. This paper demonstrates that the endeavour to extend the principles of object-orientation into an all-encompassing method is ill-considered, and that an expansive view of methods integration is more likely to prove conducive in harnessing the strengths of object-orientation to alternative approaches to systems development.

23 citations


Journal ArticleDOI
TL;DR: A set of techniques, called delta cost estimation, is described, in which estimates are made by comparing a proposed project's differences from a baseline, or usual, project.
Abstract: Software reuse is seen as a way to leverage commonality and reduce cost. But existing software cost models, based only on a single project, are ineffective in estimating the impact of reuse over multiple, related projects. This paper describes a set of techniques, called delta cost estimation, where estimates are made by comparing a proposed project with its differences from a baseline, or usual, project. The estimation techniques are placed in the context of an overall reuse cost cycle, where a cost-sharing bank makes loans to a producer of reusable components and then recoups the loan in amortization charges to those who reuse the resulting products.
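
A worked miniature of the delta idea (numbers and driver names are our own invention): rather than estimating from scratch, start from the baseline project's effort and apply signed adjustments for each difference.

```python
# Hedged sketch of delta cost estimation: baseline effort plus signed
# adjustments for how the proposed project differs from the usual project.
baseline_effort = 100.0              # person-months for the "usual" project

deltas = {                           # illustrative differences only
    "reuses_ui_framework": -15.0,    # less new code to write
    "new_target_platform":  +8.0,    # unfamiliar environment
    "smaller_database":     -5.0,
}

estimate = baseline_effort + sum(deltas.values())
print(f"estimated effort: {estimate:.1f} person-months")   # 88.0
```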

22 citations


Journal ArticleDOI
TL;DR: The requirements of a new measurement method are proposed, emphasizing increased application of measurement within the context of software development processes.
Abstract: Measurement is a much advocated, yet infrequently applied technique of software engineering. A major contributory factor to this state of affairs is that the majority of software metrics are developed, collected and applied in a haphazard fashion. The result is metrics that frequently are poorly formulated, inappropriate to the specific needs and environment of the using organization and hard to analyse once collected. Over recent years a number of measurement methods—frameworks for developing and applying metrics—have been proposed and used to rectify this state of affairs. This paper describes and evaluates these various methods and identifies strengths and weaknesses, leading to an agenda for further work. The requirements of a new measurement method are then proposed, emphasizing increased application of measurement within the context of software development processes.

20 citations


Journal ArticleDOI
TL;DR: EVA is a multimedia database system capable of storage, retrieval, management, analysis and delivery of objects of various media types, including text, audio, images and moving pictures; it is formally defined in an algebraic framework.
Abstract: EVA is a multimedia database system capable of storage, retrieval, management, analysis and delivery of objects of various media types, including text, audio, images and moving pictures. The interface language deals with the temporal and spatial aspects of multimedia information retrieval and delivery, in addition to the usual capabilities provided by the ordinary database languages. EVA has five classes of operations, namely: operations for querying and updating (i.e. editing) the multimedia information, operations for screen management, temporal operators, operators for specifying rules and constraints, and aggregation (computational) operators. EVA, an extension of the query language Varqa, is a functional language whose notation is based on that of conventional set theory. It is formally defined in an algebraic framework. EVA is object oriented and supports objects, object classes, attributes and methods of objects, and relationships between objects. The current implementation of EVA runs on several different platforms.

Journal ArticleDOI
TL;DR: The Global System Model is presented and its benefits to the software maintenance process are discussed; the associated issues and characteristics of the model are also examined.
Abstract: Maintenance of large information systems typically suffers from a failure to fully understand both the software and the broader context in which it operates. In particular, effective maintenance requires knowledge of not only the source code, but also user documentation, system design and the business goals and objectives the system aims to support. The Esprit docket project seeks to develop an integrated set of tools, and supporting method of use, to enable software maintainers to gain a richer understanding of a software system and its components. This is achieved by enhancing traditional, reverse engineering tools with other sources of knowledge, such as test cases, documentation and user expertise, in order to provide a more comprehensive picture of a system. Underpinning the docket toolset is an organizational meta-model called the Global System Model. This model seeks to represent all aspects of an information system, necessary for system understanding and software maintenance, at different levels of abstraction. This paper presents the Global System Model and discusses its benefits to the software maintenance process. The associated issues and characteristics of the model are also examined.

Journal ArticleDOI
TL;DR: The aim of this work is to demonstrate the flexibility of the architecture in serving the needs of a number of distinct user groups and to demonstrate that the virtual architecture is capable of supporting some of the main hypermedia access methods.
Abstract: This paper discusses an architecture for knowledge-based hypermedia systems based on work from semantic databases. Its power derives from its use of a single, uniform data structure which can be used to store both the intensional and extensional information needed to generate hypermedia systems. The architecture is also sufficiently powerful to accommodate the representation of reasonable amounts of knowledge within a hypermedia system. Work has been conducted in building a number of prototypes on a small information base of digital image data. The prototypes serve as demonstrators of systems for managing the large amounts of information held by museums on their artefacts. The aim of this work is to demonstrate the flexibility of the architecture in serving the needs of a number of distinct user groups. To this end, the first prototype has demonstrated that the virtual architecture is capable of supporting some of the main hypermedia access methods. The current demonstrator is being used to investigate the potential of the approach for handling multiple classifications of hypermedia material. The research is particularly directed at the incorporation of evolving temporal and spatial knowledge.

Journal ArticleDOI
TL;DR: An innovative net-based methodology to integrate qualitative and quantitative analysis of distributed software systems is outlined, and an on-going prototype implementation of a related graphic-oriented tool kit is sketched.
Abstract: An innovative net-based methodology to integrate qualitative and quantitative analysis of distributed software systems is outlined, and an on-going prototype implementation of a related graphic-oriented tool kit is sketched. The proposed method combines qualitative analysis, monitoring and testing as well as quantitative analysis on the basis of a net-based intermediate representation of the distributed software system under consideration. All transformations (from the distributed software system into a first Petri net model, and between the different kinds of net models) can be made formally, and therefore automated to a high degree. The evaluation of quantitative properties is based on so-called object nets which are obtained by a property-preserving structural compression and quantitative expansion of the qualitative model. In this way, the frequency and delay attributes necessary to generate quantitative models are provided by the monitoring and testing component.
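
For readers unfamiliar with the intermediate representation, a minimal Petri-net sketch (our illustration, not the paper's model) shows the qualitative core: places hold tokens, a transition is enabled when all its input places are marked, and firing moves tokens.

```python
# Minimal Petri net: firing a transition consumes a token from each input
# place and produces one in each output place. In a stochastic net,
# transitions also carry rates/delays, which is where monitored frequency
# and delay attributes would be attached.
marking = {"p_ready": 1, "p_msg": 1, "p_done": 0}
transitions = {"t_receive": (["p_ready", "p_msg"], ["p_done"])}

def enabled(t):
    inputs, _ = transitions[t]
    return all(marking[p] >= 1 for p in inputs)

def fire(t):
    inputs, outputs = transitions[t]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

if enabled("t_receive"):
    fire("t_receive")
print(marking)    # {'p_ready': 0, 'p_msg': 0, 'p_done': 1}
```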

Journal ArticleDOI
TL;DR: How a hypermedia link service can be applied to the management of multimedia information is described, and a detailed description of the design of the open hypermedia system Microcosm is given to illustrate how a link service can be used to provide access to multimedia resources from a range of standard application packages.
Abstract: This paper describes how a hypermedia link service can be applied to the management of multimedia information. The development of open hypermedia systems and link services is described in the context of the development of hypermedia systems generally. A detailed description of the design of the open hypermedia system Microcosm is given. This is used to illustrate how a link service can be used to provide access to multimedia resources from a range of standard application packages.
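
The key idea of a link service can be sketched in a few lines (our simplification, not Microcosm's actual design): links live in a separate linkbase rather than being embedded in documents, so any application can ask which links apply to a selection.

```python
# Toy linkbase: links are stored outside the documents they connect.
# A source document of None marks a generic link that applies anywhere.
linkbase = [
    ("report.txt", "Mozart", "mozart_biography.html"),
    (None,         "sonata", "glossary.html#sonata"),
]

def follow_links(document, selection):
    return [target for (doc, text, target) in linkbase
            if text == selection and doc in (None, document)]

print(follow_links("report.txt", "Mozart"))   # ['mozart_biography.html']
print(follow_links("notes.txt", "sonata"))    # generic link still applies
```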

Journal ArticleDOI
TL;DR: This paper shows how the solution space of the genetic algorithm can be set up in the form of tree structures (forests), and how these are encoded by a simple integer assignation.
Abstract: The focus of this paper is database design using automated database design tools or more general CASE tools. We present a genetic algorithm for the optimization of (internal) database structures, using a multi-criterion objective function. This function expresses conflicting objectives, reflecting the well-known time/space trade-off. This paper shows how the solution space of the algorithm can be set up in the form of tree structures (forests), and how these are encoded by a simple integer assignation. Genetic operators (database transformations) defined in terms of this encoding behave as if they manipulate tree structures. Some basic experimental results produced by a research prototype are presented.
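
A compressed sketch of the ingredients (encoding, weights and costs below are our stand-ins, not the paper's): candidate structures are integer vectors, and a weighted multi-criterion fitness expresses the time/space trade-off.

```python
# Toy genetic algorithm over integer-encoded design decisions with a
# weighted time/space objective; all costs here are illustrative.
import random

random.seed(0)
GENES, CHOICES = 6, 4    # e.g. one storage-structure choice per record type

def fitness(chrom):
    time_cost = sum((g - 1) ** 2 for g in chrom)    # stand-in query cost
    space_cost = sum(chrom)                         # stand-in storage cost
    return -(0.7 * time_cost + 0.3 * space_cost)    # conflicting objectives

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.randrange(CHOICES) for _ in range(GENES)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                              # keep the fittest half
    pop = parents + [crossover(random.choice(parents), random.choice(parents))
                     for _ in range(10)]
print(max(pop, key=fitness))
```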

Journal ArticleDOI
TL;DR: A composite model of multimedia courseware development effort is proposed which makes use of a Rayleigh curve and cost drivers, along with initial analysis of cost drivers and delivery time.
Abstract: A recurring problem faced by multimedia courseware developers is the accurate estimation of development effort. This paper outlines a metrics based model for predicting the development effort of multimedia courseware. A composite model of multimedia courseware development effort is proposed which makes use of a Rayleigh curve and cost drivers. Initial analysis of cost drivers and delivery time are described along with future work to develop a rigorous model for multimedia courseware development.
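
For readers unfamiliar with the Rayleigh component, a small sketch (parameter values are illustrative only): the staffing rate follows dE/dt = 2Kat·exp(-at²), where K is total effort and a sets where effort peaks; cost drivers would then scale K for a particular project.

```python
# Rayleigh staffing curve: effort_rate(t) = 2*K*a*t*exp(-a*t^2).
# K is total effort, a is a shape parameter; the peak is at t = 1/sqrt(2a).
import math

K = 120.0    # total development effort, e.g. person-days (illustrative)
a = 0.02     # shape parameter (illustrative)

def effort_rate(t):
    return 2 * K * a * t * math.exp(-a * t * t)

peak = 1 / math.sqrt(2 * a)
print(f"peak staffing at t = {peak:.1f}, rate = {effort_rate(peak):.1f}")
```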

Journal ArticleDOI
TL;DR: A set of functional requirements for a matrix editor for a metaCASE environment is presented, and a user interface for such an editor is suggested, using experience from the design of the Matrix Editor for MetaEdit+, under development in the MetaPHOR project at the University of Jyvaskyla.
Abstract: Research in metaCASE or CASE shells has largely focused on supporting methods by allowing the definition of the concepts and representational symbols used in their diagrams. Little interest has been shown in environments supporting representational paradigms other than diagrams, such as matrices or (hyper-)text. The matrix in particular is often a better format for business information systems, metamodelling, and automatic algorithms for decomposing a system. This paper presents a set of functional requirements for a matrix editor for a metaCASE environment, and suggests a user interface for such an editor, using experience from the design of the Matrix Editor for MetaEdit+, under development in the MetaPHOR project at the University of Jyvaskyla.

Journal ArticleDOI
TL;DR: A methodology for integrating schema translation and data conversion that translates the existence and navigational semantics of the network database into a relational database without loss of information is described.
Abstract: Because of their popularity and user adaptability, many companies have chosen relational databases as their corporate database systems. However, with a huge investment in existing network and hierarchical database applications, there is a need for some companies to convert them into relational databases. Currently the simplest solution for data conversion is to write an application program for each network database file conversion or to develop a language to implement the conversion process. These are inefficient and costly. This paper describes a methodology for integrating schema translation and data conversion. Schema translation involves semantic reconstruction and the mapping of network schema into relational schema. Data conversion involves unloading records of each record type and their relationship relations into sequential files, and uploading them into relational table files. The conversion process can recover the semantics of the network schema by capturing users' knowledge. The methodology preserves the constraints of the network database by mapping the equivalent data dependencies of a loop-free network schema to a relational schema. The translation of navigational semantics from network schema to relational schema implies that each distinct access path of the network schema should be considered as a subschema before conversion. The conversion process translates the existence and navigational semantics of the network database into a relational database without loss of information.
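
A simplified sketch of the schema-translation step (our own, with hypothetical record types; the paper additionally recovers semantics from users and handles data conversion): each network record type becomes a table, and each owner-member set becomes a foreign key in the member's table.

```python
# Toy network-to-relational schema translation: record types map to
# tables; each set (owner, member) adds a foreign key to the member.
network_schema = {
    "record_types": {
        "DEPT": ["dept_no", "dname"],
        "EMP":  ["emp_no", "ename"],
    },
    "sets": {"WORKS_IN": ("DEPT", "EMP")},   # set name -> (owner, member)
}

def translate(schema):
    tables = {name: list(fields)
              for name, fields in schema["record_types"].items()}
    for _set, (owner, member) in schema["sets"].items():
        owner_key = schema["record_types"][owner][0]  # assume first field is the key
        tables[member].append(f"{owner_key} REFERENCES {owner}")
    return tables

for name, cols in translate(network_schema).items():
    print(name, cols)    # EMP gains 'dept_no REFERENCES DEPT'
```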

Journal ArticleDOI
TL;DR: A new logical data model (the HM Data Model) is presented which incorporates the well-known principles of object-oriented data modelling into the management of large-scale, multi-user hypermedia databases and provides a solid logical basis for semantic modelling.
Abstract: Although the object-oriented paradigm is well suited for modelling self-contained independent objects, it is not suited for modelling persistent relations (static links) between abstract data objects. At the same time, the concept of computer-navigable links is an integral part of the hypermedia paradigm. In contrast to multimedia, where the object-oriented paradigm plays a leading role, this ‘static link’ deficiency considerably reduces the application of object-oriented methods in hypermedia. In this paper, we present a new logical data model (the HM Data Model) which incorporates the well-known principles of object-oriented data modelling into the management of large-scale, multi-user hypermedia databases. The model is based on the notion of abstract hypermedia data objects called S-collections. Computer-navigable links are encapsulated within a particular S-collection and are also bound between S-collections. This approach not only overcomes the static link deficiency of the object-oriented paradigm, but also supports modularity, incremental development, and flexible versioning, and provides a solid logical basis for semantic modelling.

Journal ArticleDOI
TL;DR: The work presented in this paper addresses the problem of handling uncertainty in the HCPRs system based on the ideas of Dempster-Shafer uncertainty calculus and provides a simple, intuitive notion of the precision of an inference which relates it to the amount of information found.
Abstract: This paper addresses the problem of reasoning under time constraints with incomplete and uncertain information. It is based on the idea of Variable Precision Logic (VPL), introduced by Michalski and Winston. The approach taken is to vary the precision of inferences in order to produce the most accurate answer possible within a given time limit. VPL deals with both the problem of reasoning with incomplete information subject to time constraints and the problem of reasoning efficiently with exceptions. It offers mechanisms for handling trade-offs between the precision of inferences and the computational efficiency of deriving them. The two aspects of precision are the certainty of belief in a conclusion and the specificity of that conclusion. Michalski and Winston suggested the Censored Production Rule (CPR) as an underlying representational and computational mechanism to enable logic-based systems to exhibit variable precision in which certainty varies while specificity stays constant. CPRs are obtained by augmenting ordinary production rules with an exception condition and are written as 'if A then B unless C', where C is the exception condition. Such rules are employed in situations in which the conditional statement 'if A then B' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception condition when the resources needed to establish its presence are tight, or when there simply is no information available as to whether it holds or does not hold. Thus the 'if A then B' part of the CPR expresses important information, while the 'unless C' part acts only as a switch that changes the polarity of B to ¬B when C holds. As an extension of CPR, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation proposed by Bharadwaj and Jain exhibits both variable certainty as well as variable specificity. The work presented in this paper addresses the problem of handling uncertainty in the HCPRs system based on the ideas of Dempster-Shafer uncertainty calculus. The use of Dempster-Shafer theory to formalize VPL-type inference provides a simple, intuitive notion of the precision of an inference which relates it to the amount of information found. This formalism allows the ignorance in the evidence to be preserved through the reasoning process and expressed in the decision. To make the HCPRs system more effective, specialized combination and propagation functions for reasoning in taxonomies are developed, and thereby two different reasoning schemes for the HCPRs system are presented. Examples are given to demonstrate the behaviour of the proposed schemes.
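
A toy rendering of a censored production rule's behaviour (our illustration with booleans; the paper's contribution attaches Dempster-Shafer belief functions instead): checking the censor C costs resources, so under a tight time limit the rule answers quickly but with lower certainty.

```python
# 'if A then B unless C' with variable certainty: skipping the censor
# check gives a fast, less certain answer; checking it may flip B to not-B.
def cpr(a, censor, time_left):
    if not a:
        return None                       # rule does not apply
    if time_left < 1:                     # no resources to check the censor
        return ("B", "lower certainty")
    if censor():                          # censor holds: polarity flips
        return ("not B", "higher certainty")
    return ("B", "higher certainty")

# 'if bird then flies unless penguin'
print(cpr(True, lambda: False, time_left=5))   # ('B', 'higher certainty')
print(cpr(True, lambda: True,  time_left=5))   # ('not B', 'higher certainty')
print(cpr(True, lambda: True,  time_left=0))   # ('B', 'lower certainty')
```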

Journal ArticleDOI
TL;DR: This paper introduces a software process language called ‘Concurrent Software Process Language’ (CSPL) to serve as a process-centred environment that is capable of modelling the concurrency in software processes.
Abstract: This paper introduces a software process language called ‘Concurrent Software Process Language’ (CSPL) to serve as a process-centred environment. ‘Concurrent’ implies that CSPL is capable of modelling the concurrency in software processes. The CSPL syntax inherits mostly from Ada, and the CSPL semantics is in the Unix shell. A CSPL process program is translated into Unix shell to enact (execute) a software process. Examples are given to illustrate how CSPL provides environmental support to guide, automate or simulate software processes. A CSPL prototype is currently being developed on the Sun workstation.

Journal ArticleDOI
TL;DR: The main issues related to the use of the Generalized Stochastic Petri Nets (GSPN) formalism for computer supported concurrent software comprehension and validation are discussed.
Abstract: The main issues related to the use of the Generalized Stochastic Petri Nets (GSPN) formalism for computer supported concurrent software comprehension and validation are discussed. A GSPN-based approach is currently under investigation in the EPOCA Project, based on the integration of the DIStributed C development system with the GreatSPN tool for editing and analysis of GSPN models.

Journal ArticleDOI
TL;DR: An evaluation and selection approach for CASE software or CASE tools is illustrated that considers the possibility that, in any phase of the process, no single CASE product is suitable and that a combination of products must be utilized.
Abstract: This paper illustrates an evaluation and selection approach for CASE software or CASE tools. The approach incorporates three phases: (1) CASE software screening; (2) CASE tool evaluation; and (3) confirmation of the final CASE software selection. Initially, developing a short list through screening of commercial CASE products determines whether appropriate tools exist and narrows the field of available CASE software products for detailed consideration. The second phase determines which of the remaining products (the finalists) best meets the needs of the organization, from both functional and technical perspectives. The final phase compares user requirements with the features of the selected CASE software by defining how these requirements will be satisfied by building applications with the selected product. The approach also considers the possibility that, in any phase of the process, no single CASE product is suitable and that a combination of products must be utilized.

Journal ArticleDOI
TL;DR: The adequacy of the models in addressing the problems encountered in the projects is assessed, and a broader model of information system development is proposed, encompassing strategic planning, organizational learning and the reconciliation of alternative viewpoints.
Abstract: Many current models of information system development originated before the widespread use of Computer Aided System Engineering (CASE) tools. The aim of CASE is to automate part of the development process, thus increasing productivity and quality. In this paper, we assess the validity of four significant models of system development against the experiences of three CASE-based development projects. A valid model is a necessary but not sufficient basis for process improvement. The adequacy of the models in addressing the problems encountered in the projects is assessed, and a broader model of information system development is proposed, encompassing strategic planning, organizational learning and the reconciliation of alternative viewpoints.

Journal ArticleDOI
TL;DR: Development of a method for analysis and design of multimedia presentation interfaces is described, which investigates task-based information analysis, persistence of information, selective attention and concurrency in presentation.
Abstract: Multimedia interfaces are currently created primarily by intuition. Development of a method for analysis and design of multimedia presentation interfaces is described. The study investigates task-based information analysis, persistence of information, selective attention and concurrency in presentation. The method gives an agenda of issues, diagrams and techniques for specification, with guidelines for media selection and presentation scripting. Use of the method is illustrated with an example interface from a shipboard emergency management system.

Journal ArticleDOI
TL;DR: ANDES, a language for modelling the quantitative behaviour of parallel programs, is presented; it is employed in the ALPES environment, where a parallel synthetic program is generated from an ANDES quantitative model.
Abstract: The goal of this paper is to present ANDES, a language for modelling quantitative behaviour of parallel programs. ANDES is employed in the ALPES environment. From an ANDES quantitative model, a parallel synthetic program is generated. This synthetic program is executed on a real distributed memory parallel computer. Some performance indices are obtained from this execution. Currently, mapping strategies are being evaluated using this approach. The ALPES environment is a research domain in the APACHE project.

Journal ArticleDOI
Ivan Aaen
TL;DR: Using principal component analysis the paper illustrates how demographic factors, objectives for acquiring CASE, selection criteria, introduction activities, and the internal diffusion of tools associate with tool related and organizational problems.
Abstract: The paper deals with CASE introduction problems experienced in user organizations based on data from a combined Danish and Finnish questionnaire study. Using principal component analysis the paper illustrates how demographic factors, objectives for acquiring CASE, selection criteria, introduction activities, and the internal diffusion of tools associate with tool related and organizational problems.

Journal ArticleDOI
TL;DR: A pragmatic definition of incompleteness is introduced, a classification based on its potential sources, leading to the observation that completeness, though needed to properly reason about, and capture the behaviour of, the system, is undesirable in some cases.
Abstract: Completeness is usually listed as a desirable attribute of specifications; incompleteness, as a reason for the failure of software to satisfy its intended requirements. Unfortunately, these terms are rarely given anything but intuitive definitions, making it unclear how to achieve the former, or alternatively, avoid the latter. This article examines the notion of (in)completeness in specifications from a number of perspectives, and then introduces a pragmatic definition of incompleteness: a classification based on its potential sources. From this, it observes that completeness, though needed to properly reason about, and capture the behaviour of, the system, is undesirable in some cases.

Journal ArticleDOI
TL;DR: A new programming methodology and language is proposed, integrating textual and graphical descriptions, which is much better suited to parallel programming; program design, coding and visualization can then be performed in one single formalism suitable for all phases.
Abstract: A programming language which supports the adequate specification of both parallel and sequential aspects of a program establishes the optimal basis for a parallel programming methodology. Parallel languages that are entirely based on textual representations are not the best choice for describing parallelism. The main drawbacks stem from the fact that the sequential order of textual representations hides the parallel structure of a program. We propose a new programming methodology and language (called Meander), integrating textual and graphical descriptions, which is much better suited for parallel programming. Parallel aspects are formulated by means of a specification graph which is annotated with sequential code. Program design, coding and visualization can then be performed in one single formalism which is suitable for all phases.