
Showing papers on "Software portability published in 1988"


Journal ArticleDOI
TL;DR: The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment); the first in a series of TAME system prototypes has been developed.
Abstract: Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented, including a reassessment of its initial architecture.

1,351 citations


Journal ArticleDOI
TL;DR: This paper describes the design and implementation of virtual memory management within the CMU Mach Operating System and the experiences gained by the Mach kernel group in porting that system to a serverless system.
Abstract: The authors describe the design, implementation, and evaluation of the Mach virtual-memory management system. The Mach virtual-memory system exhibits architecture independence, multiprocessor and d...

104 citations


Journal ArticleDOI
TL;DR: GEMPACK as discussed by the authors is a software package developed specifically to reduce dramatically the research time, effort and cost required to set up one solution method (the Johansen method) on an actual computer.

85 citations


Journal ArticleDOI
TL;DR: This framework for developing expert systems for statistical process control applications is partitioned into three sets: domain-independent analysis rules, which determine whether or not the sample observations indicate a lack of control; interpretive rules, which analyze the patterns in the chart in terms of process changes.

53 citations
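The flavor of such analysis rules can be made concrete. Below is a minimal, hypothetical sketch in Python (the paper does not specify an implementation): it flags a lack of control using the classic 3-sigma limit test plus a simple run rule, the kind of domain-independent check such a rule base would encode.

```python
def out_of_control(samples, mean, sigma):
    """Illustrative 'analysis rule': report a lack of control if any
    sample falls outside the 3-sigma control limits, or if 8 consecutive
    samples fall on the same side of the center line (a common run rule).
    A sketch of the idea, not the paper's actual rule base."""
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    if any(x > ucl or x < lcl for x in samples):
        return True
    run = 0  # signed length of the current same-side run
    for x in samples:
        if x > mean:
            run = run + 1 if run > 0 else 1
        elif x < mean:
            run = run - 1 if run < 0 else -1
        else:
            run = 0
        if abs(run) >= 8:
            return True
    return False
```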


Proceedings ArticleDOI
05 Oct 1988
TL;DR: An automated library system based on faceted classification that is being prototyped and some management issues related to the establishment of a reusability program within an organization and the effective use of a software component library are discussed.
Abstract: The authors present material on software classification, survey existing techniques, describe research on faceted classification, contrast the different approaches, and describe the process of constructing a faceted classification scheme. They describe an automated library system based on faceted classification that is being prototyped. Some management issues related to the establishment of a reusability program within an organization and the effective use of a software component library are discussed.

38 citations


Journal ArticleDOI
TL;DR: The version presented here operates correctly on a large number of different floating-point systems, including those implementing the new IEEE Floating-Point Standard.
Abstract: Numerical software written in high-level languages often relies on machine-dependent parameters to improve portability. MACHAR is an evolving FORTRAN subroutine for dynamically determining thirteen fundamental parameters associated with a floating-point arithmetic system. The version presented here operates correctly on a large number of different floating-point systems, including those implementing the new IEEE Floating-Point Standard.

37 citations
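The flavor of MACHAR-style dynamic parameter discovery can be illustrated with Malcolm's classic probing technique. The sketch below is in Python rather than MACHAR's FORTRAN, and is not MACHAR's actual code; it determines two of the thirteen parameters, the radix and the machine epsilon, purely by arithmetic experiments.

```python
def find_radix_and_eps():
    """Probe the floating-point radix and machine epsilon dynamically,
    in the spirit of MACHAR (illustrative sketch, not MACHAR itself)."""
    # Grow a until the spacing between adjacent floats near a exceeds 1,
    # i.e. until (a + 1) - a no longer recovers 1 exactly.
    a = 1.0
    while (a + 1.0) - a == 1.0:
        a *= 2.0
    # At that magnitude, the first b with a + b != a reveals the radix.
    b = 1.0
    while (a + b) == a:
        b += 1.0
    radix = int((a + b) - a)
    # Machine epsilon: smallest power of the radix still visible next to 1.
    eps = 1.0
    while 1.0 + eps / radix != 1.0:
        eps /= radix
    return radix, eps
```

On an IEEE double-precision system this yields a radix of 2 and epsilon of 2^-52.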


Proceedings ArticleDOI
03 Nov 1988
TL;DR: A five-year effort under the Ada Joint Program Office has developed a proposed standard for a host system interface as seen by tools running in an Ada Programming Support Environment (APSE).
Abstract: A five-year effort under the Ada Joint Program Office has developed a proposed standard for a host system interface as seen by tools running in an Ada Programming Support Environment (APSE). Standardization of this interface as DOD-STD-1838A will have a number of desirable effects for the Department of Defense, including tool portability, tool integration, data transportability, encouragement of a market in portable tools, and better programmer productivity. As the capability of tools to communicate with each other is a central requirement in APSEs, the Common APSE Interface Set (CAIS) has paid particular attention to facilitating such communication in a host-independent fashion. CAIS incorporates a well-integrated set of concepts tuned to the needs of writers and users of integrated tool sets. This paper covers several of these concepts: the entity management system used in place of a traditional filing system; object typing with inheritance; process control, including atomic transactions; access control and security; input/output methods; support for distributed resource control; and facilities for inter-system data transport.

29 citations


Proceedings ArticleDOI
09 Feb 1988
TL;DR: SPQR (Selectional Pattern Queries and Responses), a module of the PUNDIT text-processing system designed to facilitate the acquisition of domain-specific semantic information, and to improve the accuracy and efficiency of the parser, is presented.
Abstract: This paper presents SPQR (Selectional Pattern Queries and Responses), a module of the PUNDIT text-processing system designed to facilitate the acquisition of domain-specific semantic information, and to improve the accuracy and efficiency of the parser. SPQR operates by interactively and incrementally collecting information about the semantic acceptability of certain lexical co-occurrence patterns (e.g., subject-verb-object) found in partially constructed parses. The module has proved to be a valuable tool for porting PUNDIT to new domains and acquiring essential semantic information about the domains. Preliminary results also indicate that SPQR causes a threefold reduction in the number of parses found, and about a 40% reduction in total parsing time.

29 citations


Journal ArticleDOI
01 Oct 1988
TL;DR: The SCHEDULE package as discussed by the authors provides an environment for developing and analyzing explicitly parallel programs in FORTRAN which are portable, including a preprocessor to achieve complete portability of user level code and a graphics post processor for performance analysis and debugging.
Abstract: This paper will describe some recent attempts to construct transportable numerical software for high-performance computers. Restructuring algorithms in terms of simple linear algebra modules is reviewed. This technique has proved very successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The use of modules to encapsulate parallelism and reduce the ratio of data movement to floating-point operations has been demonstrably effective for regular problems such as those found in dense linear algebra. In other situations it may be necessary to express explicitly parallel algorithms. We also present a programming methodology that is useful for constructing new parallel algorithms which require sophisticated synchronization at a large grain level. We describe the SCHEDULE package which provides an environment for developing and analyzing explicitly parallel programs in FORTRAN which are portable. This package now includes a preprocessor to achieve complete portability of user level code and also a graphics post processor for performance analysis and debugging. We discuss details of porting both the SCHEDULE package and user code. Examples from linear algebra and partial differential equations are used to illustrate the utility of this approach.

18 citations


Proceedings ArticleDOI
10 Oct 1988
TL;DR: Results show that the interpreted execution speed of image algebra code compares favorably with hand-coded versions of similar algorithms and that the implementation can be easily modified to add further functionality.
Abstract: The implementation and use of a machine-independent image-processing language, the AFATL (Air Force armament laboratory) image algebra, on a massively parallel machine is presented. The authors introduce the problem of specifying image-processing algorithms in a machine-dependent way, introduce the image algebra, provide an overview of how image algebra constructs are implemented in Connection Machine *lisp and provide examples of the use of image algebra for a variety of image-processing operations. The generality, level of portability, and efficiency of the existing implementation are discussed. Results show that the interpreted execution speed of image algebra code compares favorably with hand-coded versions of similar algorithms and that the implementation can be easily modified to add further functionality.

17 citations
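The machine-independent style of an image algebra can be sketched in a few lines: images are arrays over a common point set, and operators are applied pointwise, so the same algorithm text could target a serial interpreter or a parallel backend. The functions below are an illustrative serial sketch in Python, not the AFATL *lisp implementation.

```python
def pointwise(op, *images):
    """Apply an operator pointwise across one or more same-shaped images
    (each image is a list of rows). Only this primitive would change on
    a parallel target; algorithms built on it stay machine-independent."""
    return [[op(*pixels) for pixels in zip(*rows)] for rows in zip(*images)]

def threshold(image, t):
    """A unary image-algebra operation expressed via the pointwise primitive."""
    return pointwise(lambda p: 1 if p >= t else 0, image)
```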


Book ChapterDOI
01 Jan 1988
TL;DR: SDEF is intended to provide 1) systolic algorithm researchers/developers with an executable notation, and 2) the software systems community with a target notation for the development of higher level syStolic software tools.
Abstract: SDEF, a systolic array programming system, is presented. It is intended to provide 1) systolic algorithm researchers/developers with an executable notation, and 2) the software systems community with a target notation for the development of higher level systolic software tools. The design issues associated with such a programming system are identified. A spacetime representation of systolic computations is described briefly in order to motivate SDEF's program notation. The programming system treats a special class of systolic computations, called atomic systolic computations, any one of which can be specified as a set of properties: the computation's 1) index set (S), 2) domain dependencies (D), 3) spacetime embedding (E), and 4) nodal function (F). These properties are defined and illustrated. SDEF's user interface is presented. It comprises an editor, a translator, a domain type database, and a systolic array simulator used to test SDEF programs. The system currently runs on a Sun 3/50 operating under Unix and X Windows. Key design choices affecting this implementation are described. SDEF is designed for portability. The problem of porting it to a Transputer array is discussed.

Proceedings ArticleDOI
Kurt Geihs1, B. Schoener1, Ulf Hollberg1, Hermann Schmutz1, Herbert Eberle1 
11 Apr 1988
TL;DR: The prototype implementation of DACNOS demonstrates that it is feasible to add powerful and flexible means for distributed cooperation to an operating system without affecting its existing individual interfaces and applications.
Abstract: The DAC Network Operating System (DACNOS) was designed to support resource sharing in a world of interconnected heterogeneous computing systems. The prototype implementation demonstrates that it is feasible to add powerful and flexible means for distributed cooperation to an operating system without affecting its existing individual interfaces and applications. The authors describe the important design issues of DACNOS and their experiences with the implementation and performance of a prototype. Particular emphasis is put on the portability of the NOS software and on the design of the interface to the NOS kernel that provides the facilities for distributed cooperation. It is shown by an example how this set of facilities eases the implementation of distributed applications by taking most of the burden of distribution, access protection, resource management and data representation away from the programmer.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: The author explores architectural alternatives for the data repository and uses the ANSI/SPARC taxonomy to characterize the external, conceptual and internal model of the Software BackPlane's object-oriented data management facilities.
Abstract: The Software BackPlane is an integration and portability platform that facilitates software tool integration, portability, workflow control, configuration management and shared access to project information. The Software BackPlane provides a consistent user environment based on the X Window standard, portability services using a generic operating system interface, and structured data management, using a multilayered data repository. The author explores architectural alternatives for the data repository. The ANSI/SPARC taxonomy is used to characterize the external, conceptual and internal model of the Software BackPlane's object-oriented data management facilities.

Book
01 Oct 1988

Book ChapterDOI
R. Staroste1, Hermann Schmutz1, M. Wasmund1, Alexander Schill1, W. Stoll1 
01 Jan 1988
TL;DR: A portability environment has to support methods of structuring communication software in multiple threads and has to provide access to lower-level communication services in a uniform, guest-system-independent way; a method is introduced which integrates multiple networks into a global net.
Abstract: This paper describes a portability environment, which has been developed for a portable network operating system. The environment has to support methods of structuring communication software in multiple threads and has to provide access to lower level communication services in a uniform, guest-system-independent way. In order to save the users' investments in communication equipment and software, a method is introduced which integrates multiple networks into a global net. The requirements of a portability environment are analyzed and the developed design concepts are derived. Furthermore, alternatives of its implementation in guest systems are discussed. Finally, experiences are presented that were gained while porting it from the development system to other guest systems.

Journal ArticleDOI
Les Hatton1, Andy Wright1, Stuart Smith1, G.E. Parkes1, Paddy Bennett1 
TL;DR: The design and successful implementation of a 500,000+ line portable FORTRAN 77 package for the processing of seismic data is described, which exhibits demonstrably high efficiency on a wide variety of machines from minicomputers to the largest supercomputers.
Abstract: The portability of software has become a major commercial issue in recent times. Such portability does not come easily, as few if any computer languages are really portable in practice. An additional complicating factor, especially in the commercial environment, is that the resulting software must be efficient. This paper describes the design and successful implementation of a 500,000+ line portable FORTRAN 77 package for the processing of seismic data. The package exhibits demonstrably high efficiency on a wide variety of machines from minicomputers to the largest supercomputers. Experiences gained during this exercise throw much light on the integration of the various thought processes which occur during the software engineering cycle, especially the notion of locality.

Journal ArticleDOI
TL;DR: The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development, with particular emphasis on the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.
Abstract: The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 applications functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting, conversion utilities, and high-level device independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

Patent
04 Nov 1988
TL;DR: In this article, the authors present a system and method for providing application program portability and consistency across a number of different hardware, database, transaction processing and operating system environments, including a plurality of processes for performing one or more tasks required by the application software.
Abstract: Virtual interface system and method for enabling software applications to be environment independent. A system and method for providing application program portability and consistency across a number of different hardware, database, transaction processing and operating system environments. In the preferred embodiment, the system includes a plurality of processes for performing one or more tasks required by the application software in one or more distributed processors of a heterogeneous or "target" computer. In a run-time mode, program code of the application software is pre-processed, compiled and linked with system interface modules to create code executable by an operating system of the target computer. The executable code, which includes a number of functional calls to the processes, is run by the operating system to enable the processes to perform the tasks required by the application software. Communications to and from the processes are routed by a blackboard switch logic through a partitioned storage area or "blackboard".

Journal ArticleDOI
01 Mar 1988
TL;DR: The package, which implements the event view, has procedures and functions that allow discrete event simulation programs to be easily developed in Pascal, including the ability to apply top-down design, self-documentation, portability and the fact that the only development software required is the Pascal compiler.
Abstract: SIMTOOLS is a collection of procedures and functions that allow discrete event simulation programs to be easily developed in Pascal. The package, which implements the event view, has procedures for creating and deleting entities, managing lists or queues, event scheduling and sequencing, system tracing and data collection. It is useful for any model-building effort, but especially those which do not fall neatly into the class of queuing networks where specialized simulation software is available. The advantages of such an approach include the ability to apply top-down design, self-documentation, portability and the fact that the only development software required is the Pascal compiler.
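The event-scheduling core of such a package fits in a few lines. The class below is a hypothetical Python analogue (SIMTOOLS itself is Pascal, and these names are illustrative, not the package's API): a time-ordered event list and a clock, which together are the essence of the "event view".

```python
import heapq
import itertools

class EventList:
    """Minimal event-view simulation kernel: schedule actions at future
    times, then execute them in time order, advancing the clock."""
    def __init__(self):
        self.clock = 0.0
        self._events = []
        self._seq = itertools.count()  # tie-breaker for simultaneous events

    def schedule(self, delay, action):
        """Schedule a zero-argument action `delay` time units from now."""
        heapq.heappush(self._events, (self.clock + delay, next(self._seq), action))

    def run(self):
        """Pop events in time order until the event list is empty."""
        while self._events:
            time, _, action = heapq.heappop(self._events)
            self.clock = time
            action()
```

A model schedules its own future events from inside actions (e.g. an arrival event schedules the next arrival), which is exactly the sequencing discipline the event view prescribes.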

Journal ArticleDOI
TL;DR: This paper describes a low-level vision software system, developed in the context of current collaborative research activities in image understanding, which goes some way toward fulfilling the goals of portability, ease of use, and general-purpose extensibility.
Abstract: Image understanding is concerned with the elucidation of a computational base inherent in perceiving a three-dimensional world using vision. This paper describes a low-level (or early) vision software system, developed in the context of current collaborative research activities in image understanding, which goes some way toward fulfilling the goals of portability, ease of use, and general-purpose extensibility. Since visual perception uses several types of disparate, but interrelated, information in some explicit cognitive organization, a central objective of the work is to represent this information in a coherent integrated manner which allows one interactively to investigate the properties of the interdependency between information types.

Journal ArticleDOI
D. E. Wolford1
TL;DR: The Common Programming Interface addresses the application development requirement for portability of applications and programmer skills, and addresses the requirements for access to host data through intelligent workstations and for transparent access to remote data and applications.
Abstract: The Common Programming Interface (CPI), one of the four key elements of Systems Application Architecture, comprises a growing set of programming languages and services. The CPI indirectly offers end-user access through the Common User Access by providing the application developer with the necessary interfaces. The CPI addresses the application development requirement for portability of applications and programmer skills. As the CPI continues to expand, it addresses the requirements for access to host data through intelligent workstations and for transparent access to remote data and applications.

Book
01 Jan 1988

Journal ArticleDOI
TL;DR: A software package is described that collects and reduces eye behavior (eye position and pupil size) data using an IBM-compatible personal computer and includes data reduction algorithms and data structures.
Abstract: A software package is described that collects and reduces eye behavior (eye position and pupil size) data using an IBM-compatible personal computer. Written in the C language for speed and portability, the package includes several unique features: data can be collected simultaneously from other sources (e.g., EEG, EMG), logically defined events can be detected in real time on any data channel, and either of two types of data matrix can be produced. Data reduction algorithms and data structures are described.
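The "logically defined events detected in real time on any data channel" feature can be sketched generically: watch a sample stream and report the onset of any condition that holds for a minimum number of consecutive samples. The function below is illustrative only (Python rather than the package's C, and not its actual API).

```python
def detect_events(samples, predicate, min_run=3):
    """Report onset indices at which `predicate` has held for `min_run`
    consecutive samples, e.g. pupil size above a threshold. A sketch of
    logically-defined event detection, not the described package's code."""
    events, run = [], 0
    for i, x in enumerate(samples):
        run = run + 1 if predicate(x) else 0
        if run == min_run:
            events.append(i - min_run + 1)  # onset of the qualifying run
    return events
```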

Proceedings ArticleDOI
24 Apr 1988
TL;DR: An approach is described for design and development of robotic system controllers that can provide more expansibility, flexibility, and portability in a robotic system than conventional multiprocessor methods.
Abstract: An approach is described for design and development of robotic system controllers. The technique, referred to as multiprocessing control, can provide more expansibility, flexibility, and portability in a robotic system than conventional multiprocessor methods. The control algorithms for a Mitsubishi robot are implemented as a multiprocessing system on a VME-bus-based computer running the PDOS operating system. The control software is written entirely in real-time C and consists of concurrent tasks synchronized with events, semaphores, and messages. A proposal for implementing the technique as hardware-independent software on the Inmos Transputer is given.

Proceedings ArticleDOI
M. Frame1
24 Oct 1988
TL;DR: A simple tool is introduced to alleviate the problem of detecting substantive differences between software versions: a line-by-line file-comparison program modified to work with a lexical analyzer so that it reports only differences in the text that might affect code generation.
Abstract: The author introduces a simple tool to alleviate the problem of detecting substantive differences in software versions. It is a typical line-by-line file-comparison program, modified to work with a lexical analyzer so that it reports only differences in the text that might affect code generation. The report of differences relates back to the original source files, so that the programmer does not have to map the code differences back to the files he is accustomed to using.
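The idea generalizes readily: run both files through a lexical analyzer and compare the token streams, so that whitespace and comment changes vanish from the report. The sketch below uses Python's own tokenizer as the lexer (the original tool targeted other languages; this is an illustration of the technique, not its implementation).

```python
import difflib
import io
import tokenize

def substantive_diff(old_src, new_src):
    """Compare two sources token-by-token, ignoring comments, newlines
    and indentation, so only differences that could affect code
    generation are reported. Illustrative sketch of the tool's idea."""
    def tokens(src):
        out = []
        for tok in tokenize.generate_tokens(io.StringIO(src).readline):
            if tok.type in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                            tokenize.INDENT, tokenize.DEDENT):
                continue  # lexically irrelevant to code generation
            out.append((tok.type, tok.string, tok.start[0]))  # keep line no.
        return out
    old, new = tokens(old_src), tokens(new_src)
    sm = difflib.SequenceMatcher(a=[t[:2] for t in old], b=[t[:2] for t in new])
    changes = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            changes.append((op,
                            [t[1] for t in old[i1:i2]],
                            [t[1] for t in new[j1:j2]]))
    return changes
```

With this scheme, editing only a comment produces an empty report, while changing a literal surfaces exactly the affected tokens.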

Proceedings ArticleDOI
01 Jan 1988
TL;DR: The CrOS communication package for parallel machines, the CUBIX system to allow a code to run in parallel or sequentially, the PLOTIX parallel graphics foundation, and the parallel debugger NDB are described.
Abstract: We describe a set of software utilities designed to facilitate the writing of parallel codes and porting sequential ones. Emphasis is placed on portability so that code can be developed simultaneously on a sequential and a parallel machine, and so that the completed code can be run and maintained on a wide variety of machine architectures. We describe the CrOS communication package for parallel machines, the CUBIX system to allow a code to run in parallel or sequentially, the PLOTIX parallel graphics foundation, and the parallel debugger NDB. While the system described has been implemented on qualitatively different machines the particular version described here is most efficient for the hypercube architecture, and was developed on NCUBE hypercubes under both the AXIS or XENIX operating systems.


Proceedings ArticleDOI
10 Oct 1988
TL;DR: The development of algorithms which can be ported among different fine-grain, massively parallel architectures and yield reasonably good implementations on each is discussed, and sample algorithms are given to solve some fundamental geometric problems.
Abstract: The development of algorithms which can be ported among different fine-grain, massively parallel architectures and yield reasonably good implementations on each is discussed. The approach is to write algorithms in terms of general data movement operations and then implement the data movement operations on the target architecture. Efficient implementation of the data movement operations requires careful programming, but since the data movement operations form the foundation of many programs, the cost of implementing them can be amortized. The use of data movement operations also helps programmers think in terms of higher-level programming units, in the same way that the use of standard data structures helps programmers of serial computers. An approach is described for designing efficient, portable algorithms, and sample algorithms are given to solve some fundamental geometric problems. The difficulties of portability and efficiency for these geometric problems are redirected into similar difficulties for the standardization operations.
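The methodology can be made concrete with a single data movement primitive. Below, an inclusive prefix scan is given a serial reference implementation in Python; an algorithm written only against such primitives is ported by reimplementing them on each target architecture. This is a sketch of the approach, not the authors' code, and the client example is hypothetical.

```python
def scan(values, op):
    """Inclusive prefix scan: out[i] = values[0] op ... op values[i].
    On a parallel target, only this primitive would be rewritten
    (e.g. as a log-depth tree scan); client code ports unchanged."""
    out, acc = [], None
    for v in values:
        acc = v if acc is None else op(acc, v)
        out.append(acc)
    return out

def running_max(heights):
    """Example client: a 'visible from the left' style geometric test,
    phrased entirely in terms of the scan primitive."""
    return scan(heights, max)
```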

Journal ArticleDOI
01 Jul 1988
TL;DR: Two contrasting types of processor, Mil-DAP and Transputer arrays, are evaluated to test their versatility and to compare their performance on algorithms related to military applications, finding that both are extremely versatile and VLSI-compatible.
Abstract: Parallelism has found its way into programmable processors as well as dedicated engines such as FFT and digital filters. However, choices of machine architecture are still open. We have evaluated two contrasting types to test their versatility and to compare their performance on algorithms related to military applications. Fine-grain SIMD, and coarse-grain MIMD machines (Mil-DAP and Transputer arrays) have been applied to a spectrum of problems including FFT, two-dimensional operators, associative processing, linear assignment, sorting, dynamic programming and ray tracing. These relate to military needs in spectrum analysis and image correlation, feature extraction from images, ESM, tracking with netted radars, speech recognition and terrain intervisibility. Possibilities for parallelism in combat simulators are also being examined. Each type of processor has been proved versatile and much more powerful than conventional sequential machines. DAP has the advantage on regular low precision algorithms, and on assignment and sorting operations where scatter, gather and shift operations are important. Transputer arrays will (when the T800 version is available) offer a better capability for floating point arithmetic and less regular tasks. The chief conclusion is that both architectures are extremely versatile and VLSI-compatible, and that choices between them will more often hinge on the cost of a minimal system, the quality of the development software and the portability of code, rather than on the fundamental properties of the machine topology.