
Showing papers on "Software portability published in 1993"


01 Sep 1993
TL;DR: The Generic Security Service Application Program Interface (GSS-API) as discussed by the authors provides security services to callers in a generic fashion, supportable with a range of underlying mechanisms and technologies and hence allowing source-level portability of applications to different environments.
Abstract: This Generic Security Service Application Program Interface (GSS-API) definition provides security services to callers in a generic fashion, supportable with a range of underlying mechanisms and technologies and hence allowing source-level portability of applications to different environments. This specification defines GSS-API services and primitives at a level independent of underlying mechanism and programming language environment, and is to be complemented by other, related specifications:

179 citations


Book ChapterDOI
01 Jan 1993
TL;DR: Texas is a persistent storage system for C++ that provides high performance while emphasizing simplicity, modularity, and portability; pointer swizzling at page-fault time exploits existing virtual memory features to implement large address spaces efficiently on stock hardware.
Abstract: Texas is a persistent storage system for C++, providing high performance while emphasizing simplicity, modularity and portability. A key component of the design is the use of pointer swizzling at page fault time, which exploits existing virtual memory features to implement large address spaces efficiently on stock hardware, with little or no change to existing compilers. Long pointers are used to implement an enormous address space, but are transparently converted to the hardware-supported pointer format when pages are loaded into virtual memory.
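The swizzling idea in the abstract can be sketched compactly. Texas operates on raw C++ pointers at page granularity using virtual-memory protection; the toy Python model below (all names hypothetical) keeps only the essence: a persistent "long" pointer is converted into a direct value the first time it is read, so later dereferences pay no translation cost.

```python
# Toy model of pointer swizzling at fault time. Hypothetical names; the
# real Texas system swizzles raw C++ pointers in whole pages using
# virtual-memory protection, not Python dicts.

LONG = "longptr"   # tag marking a persistent ("long") pointer: (tag, page, slot)

class Store:
    def __init__(self, disk):
        self.disk = disk     # page_id -> {slot: value} on "disk"
        self.ram = {}        # resident pages, swizzled incrementally
        self.faults = 0      # how many simulated page faults occurred

    def page(self, pid):
        """Return the resident copy of a page, faulting it in if needed."""
        if pid not in self.ram:
            self.faults += 1
            self.ram[pid] = dict(self.disk[pid])
        return self.ram[pid]

    def deref(self, pid, slot):
        """Read a slot; the first read through a long pointer swizzles it."""
        page = self.page(pid)
        v = page[slot]
        if isinstance(v, tuple) and v and v[0] == LONG:
            v = self.deref(v[1], v[2])
            page[slot] = v   # swizzle: later reads skip the translation
        return v
```

After the first dereference the long pointer is gone from the resident page, which is the point of doing the work at fault time rather than on every access.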

171 citations


Journal ArticleDOI
TL;DR: An architectural framework that allows software applications and operating system code written for a given instruction set to migrate to different, higher performance architectures is described, and is designed to accommodate program exceptions, self-modifying code, tracing, and debugging.
Abstract: An architectural framework that allows software applications and operating system code written for a given instruction set to migrate to different, higher performance architectures is described. The framework provides a hardware mechanism that enhances application performance while keeping the same program behavior from a user perspective. The framework is designed to accommodate program exceptions, self-modifying code, tracing, and debugging. Examples are given for IBM System/390 operating-system code and AIX utilities, showing the performance potential of the scheme using a very long instruction word (VLIW) machine as the high-performance target architecture.

134 citations


02 Jan 1993
TL;DR: This thesis establishes a unifying framework for designing memory models that can adequately satisfy the 3P criteria and applies debugging techniques for sequential consistency to two of the SCNF models to alleviate the problem of debugging programs on SCNF models.
Abstract: The memory consistency model (or memory model) of a shared-memory multiprocessor system influences the performance and the programmability of the system. The most intuitive model for programmers, sequential consistency, restricts many performance-enhancing optimizations. For higher performance, several alternative models have been proposed. The hardware-centric nature of these models, however, makes them difficult to program and inhibits portability. We use the 3P criteria of programmability, portability, and performance to assess memory models, and find current models lacking. This thesis establishes a unifying framework for designing memory models that can adequately satisfy the 3P criteria. The first contribution of this thesis is a programmer-centric methodology, called sequential consistency normal form (SCNF), for specifying memory models. SCNF is based on the observation that a system can employ performance-enhancing optimizations without violating sequential consistency if the system has some information about the program. An SCNF model is a contract between the system and the programmer, where the system guarantees high performance and sequential consistency only if the programmer provides certain information about the program. Insufficient information gives lower performance, but incorrect information violates sequential consistency. SCNF satisfies the 3P criteria of programmability (by providing sequential consistency), portability (by providing a common interface of sequential consistency across all models), and performance (by only requiring sequential consistency for programs with correct information). The second contribution demonstrates the effectiveness of SCNF by applying it to optimizations of previous hardware-centric models, resulting in four SCNF models.
Although based on intuition similar to the hardware-centric models, these SCNF models are easier to program, enhance portability, and allow more implementations (with potentially higher performance) than the corresponding hardware-centric models. The third contribution culminates the above work by exposing a large part of the design space of SCNF models. SCNF models are difficult to design because the relationship between system optimizations and programmer information is complex. We simplify this relationship and use it to characterize and explore the design space. The final contribution concerns debugging programs on SCNF models. While debugging, the programmer may unknowingly provide incorrect information, violating sequential consistency. We apply debugging techniques for sequential consistency to two of the SCNF models to alleviate this problem.

110 citations


Steve Vinoski1
01 Jan 1993
TL;DR: The Object Management Group (OMG) was formed in 1989 with the purpose of creating standards allowing for the interoperability and portability of distributed object-oriented (OO) applications.
Abstract: The Object Management Group (OMG) was formed in 1989 with the purpose of creating standards allowing for the interoperability and portability of distributed object-oriented (OO) applications. Unlike the Open Software Foundation (OSF), the OMG does not actually produce software, only specifications. These specifications are created using ideas and technology from OMG members who respond to Requests For Information (RFI) and Requests For Proposals (RFP) issued by the OMG. A strength of this approach is that most of the major players in the commercial distributed OO computing arena are among the several hundred companies that belong to the OMG.

102 citations


Journal ArticleDOI
TL;DR: Object-based graphical user interfaces that may be used as flexible, device independent front-ends for power system simulation and control are discussed and an experimental prototype GUI suitable for energy management systems or operator training simulators is described.
Abstract: Object-based graphical user interfaces (GUIs) that may be used as flexible, device independent front-ends for power system simulation and control are discussed. An experimental prototype GUI suitable for energy management systems or operator training simulators is described. The GUI is based on the X window environment and uses multiple windows to display differing views of the system and direct mouse manipulations to affect the various power system objects in a consistent fashion. An editing portion of the system allows the dynamic construction of one-line diagrams, although an advanced automatic display generation feature capable of constructing layouts based on database information and sophisticated routing and layout heuristics is also provided. The X window (and C) basis of the implementation of the system provides for its relative platform and operating system independence, and allows the networked operation of multiple platforms running the same functions. The aim of this study is to document the generic features and advantages of such a GUI.

79 citations


Book
02 Jan 1993
TL;DR: A package of linear algebra communication routines for manipulating and communicating data structures that are distributed among the memories of a distributed memory MIMD computer to increase portability, efficiency and modularity at a high level is described.
Abstract: This paper describes a package of linear algebra communication routines for manipulating and communicating data structures that are distributed among the memories of a distributed memory MIMD computer. The motivation for the BLACS is to increase portability, efficiency and modularity at a high level. The intended audience of the BLACS comprises mathematical software experts and people with large-scale scientific computations to perform.
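The data structures the BLACS move around are typically distributed block-cyclically over a process grid. As an illustration only (these helpers are not BLACS routines), the mapping along one grid dimension can be written as:

```python
# Block-cyclic data distribution along one dimension of a process grid.
# Illustrative helpers only -- these are not part of the BLACS API.

def owner(i, nb, p):
    """Which of p processes owns global index i, with block size nb."""
    return (i // nb) % p

def local_index(i, nb, p):
    """Where global index i lands in its owner's local storage."""
    full_local_blocks = (i // nb) // p      # complete blocks already local
    return full_local_blocks * nb + (i % nb)
```

With block size `nb = 2` and `p = 2` processes, global indices 0..5 land on processes 0, 0, 1, 1, 0, 0, giving each process contiguous local blocks while keeping the load balanced.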

64 citations


Proceedings ArticleDOI
20 Sep 1993
TL;DR: This work proposes a solution to the problem of parallel programming based on the use of a repertoire of parallel algorithmic forms, known as skeletons, which enables the meaning of a parallel program to be separated from its behaviour.
Abstract: Parallel programming is a difficult task involving many complex issues such as resource allocation and process coordination. We propose a solution to this problem based on the use of a repertoire of parallel algorithmic forms, known as skeletons. The use of skeletons enables the meaning of a parallel program to be separated from its behaviour. Central to this methodology is the use of transformations and performance models. Transformations provide portability and implementation choices, whilst performance models guide the choices by providing predictions of execution time. We describe the methodology and investigate the use and construction of performance models by studying an example.
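The separation of meaning from behaviour can be made concrete with two classic skeletons, written here as Python higher-order functions (a sketch, not the paper's notation): each skeleton fixes the meaning as a pure function on data, while an implementation remains free to realize `farm` as a worker farm or `pipe` as a process pipeline.

```python
from functools import reduce

# Two algorithmic skeletons as higher-order functions. The skeleton fixes
# the *meaning* (a pure function); whether the farm runs its workers in
# parallel, or the pipe becomes a process pipeline, is a behavioural
# choice left to the implementation.

def farm(worker):
    """FARM: apply one worker function independently to every task."""
    return lambda tasks: [worker(t) for t in tasks]

def pipe(*stages):
    """PIPE: feed the result of each stage into the next."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)
```

A program composed from skeletons, such as `pipe(farm(square), sum)`, has a sequential reading that a performance model can then map onto alternative parallel behaviours.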

64 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: The authors investigate the needs of some massively parallel applications running on distributed-memory parallel computers at Argonne National Laboratory and identify some common parallel I/O operations that hide the details of the actual implementation from the application, while providing good performance.
Abstract: The authors investigate the needs of some massively parallel applications running on distributed-memory parallel computers at Argonne National Laboratory and identify some common parallel I/O operations. For these operations, routines were developed that hide the details of the actual implementation (such as the number of parallel disks) from the application, while providing good performance. An important feature is the ability for the application programmer to specify that a file be accessed either as a high-performance parallel file or as a conventional Unix file, simply by changing the value of a parameter on the file open call. These routines are examples of a parallel I/O abstraction that can enhance development, portability, and performance of I/O operations in applications. Some of the specific issues in their design and implementation in a distributed-memory toolset are discussed.
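The open-call idea reads naturally as code. In this hypothetical sketch (the Argonne routines' real names and striping policy differ), the same program text targets either a conventional Unix file or a striped parallel file, switched by a single parameter:

```python
# Sketch of a parallel I/O abstraction: one open call, two backends.
# Class names, the open function, and the 2-disk striping are invented.

class UnixFile:
    """Stand-in for a conventional Unix file."""
    def __init__(self):
        self.data = bytearray()
    def write(self, buf):
        self.data += buf
    def read(self):
        return bytes(self.data)

class StripedFile(UnixFile):
    """Same interface, but round-robins fixed-size stripes across disks."""
    STRIPE = 4
    def __init__(self, ndisks=2):
        super().__init__()
        self.disks = [bytearray() for _ in range(ndisks)]
    def write(self, buf):
        super().write(buf)                       # logical file contents
        for off in range(0, len(buf), self.STRIPE):
            disk = (off // self.STRIPE) % len(self.disks)
            self.disks[disk] += buf[off:off + self.STRIPE]

def pfs_open(mode):
    """The one-parameter switch the abstract describes (name invented)."""
    return StripedFile() if mode == "parallel" else UnixFile()
```

Because both backends present the same interface, the application sees identical logical contents either way; only the physical layout changes.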

59 citations



Journal ArticleDOI
TL;DR: Mentat, a dynamic, object-oriented parallel-processing system that provides tools for constructing portable, medium-grain parallel software by combining an object-oriented approach with an underlying layered virtual-machine model is described.
Abstract: Mentat, a dynamic, object-oriented parallel-processing system that provides tools for constructing portable, medium-grain parallel software by combining an object-oriented approach with an underlying layered virtual-machine model is described. Mentat's three primary design objectives-high performance through parallel execution, easy parallelism, and software portability across a wide range of platforms-are reviewed. The performance of four applications of Mentat on two platforms-a 32-node Intel iPSC/2 hypercube and a network of 16 Sun IPC Sparcstations-is examined. The applications are DNA and protein sequence comparison, image convolution, Gaussian elimination and partial pivoting, and sparse matrix-vector multiplication. The performance of Mentat in these applications is compared to that of object-oriented parallel-processing systems, compiler-based distributed-memory systems, portable parallel-processing systems, and hand-coded implementations of the same applications.

01 Jun 1993
TL;DR: This dissertation covers the construction and management of the VW in NPSNET, a populated, networked, interactive, flexible, three dimensional (3D) virtual world system which uses both standard and non-standard network message formats.
Abstract: As military budgets shrink, the Department of Defense (DoD) is turning to virtual worlds (VW) to solve problems and address issues that were previously solved by prototype or field exercises. However, there is a critical void of experience in the community on how to build VW systems. The Naval Postgraduate School's Networked Vehicle Simulator (NPSNET) was designed and built to address this need. NPSNET is a populated, networked, interactive, flexible, three dimensional (3D) virtual world system. This dissertation covers the construction and management of the VW in NPSNET. The system, which uses both standard and non-standard network message formats, is fully networked, allowing multiple users to interact simultaneously in the VW. Commercial off-the-shelf (COTS) Silicon Graphics Incorporated (SGI) workstation hardware was used exclusively in NPSNET to ensure the usefulness and the portability of the system to many DoD commands. The core software architecture presented here is suitable for any VW. Keywords: Computer graphics, Simulator, Simulation, Networks, Virtual worlds, Artificial reality, Synthetic environments, NPSNET.

Proceedings ArticleDOI
06 Oct 1993
TL;DR: The use of a meta-communication layer, an aggressive data-structure-neutral implementation that minimizes dependence on particular data structures, permitting the library to adapt to the user rather than the other way around, and the separation of implementation language from user-interface language are presented.
Abstract: Designing a scalable and portable numerical library requires consideration of many factors, including choice of parallel communication technology, data structures, and user interfaces. The PETSc library (Portable Extensible Tools for Scientific computing) makes use of modern software technology to provide a flexible and portable implementation. This paper discusses the use of a meta-communication layer (allowing the user to choose different transport layers such as MPI, p4, pvm, or vendor-specific libraries) for portability, an aggressive data-structure-neutral implementation that minimizes dependence on particular data structures (even vectors), permitting the library to adapt to the user rather than the other way around, and the separation of implementation language from user-interface language. Examples are presented.

Proceedings ArticleDOI
01 Aug 1993
TL;DR: The most prominent recent trend in operating system (OS) design has been the move towards micro-kernel based OS's, but neither monolithic kernels nor micro-kernels have made major progress towards true portability since they do not possess fine-grain modularity.
Abstract: The most prominent recent trend in operating system (OS) design has been the move towards micro-kernel based OS's [2, 8, 9, 12, 14]. Micro-kernel based OS's allow high-level OS code to be structured as a collection of modules above a minimal kernel. Despite the many advantages of this modular approach, the performance overhead associated with existing modular implementations has proven to be a major liability in the commercial acceptance of these systems [3]. Finding a solution to the conflict between performance and modularity remains a critical research issue of practical importance. Contrary to popular belief, the micro-kernel approach to OS structuring does not lead to major improvements in portability. The modularity exhibited by most micro-kernel designs is coarse grained and orthogonal to the issue of localizing machine-dependent code. Some micro-kernel OS's do define various machine-independent interfaces within their micro-kernels, but these are unrelated to the system structuring mechanisms used in the higher layers of their OS code, and make a coarse-grained distinction between machine-dependent and portable code [1, 11]. This coarse-grained approach limits portability by limiting the amount of code that can be reused. The state of the art in OS design can be summarized as follows. The vast majority of OS's in active use are monolithic, having traded portability and modularity for performance. There has been considerable research investment in micro-kernel OS's which offer some coarse-grained modularity at the expense of performance, and hence have not received high acceptance commercially. Neither monolithic kernels nor micro-kernels have made major progress towards true portability since they do not possess fine-grain modularity.

Proceedings ArticleDOI
23 May 1993
TL;DR: Ladder logic as the primary programming language for programmable logic controllers (PLCs) is described, and deficiencies of ladder logic are discussed, and future trends in PLC programming languages and programming tools for real-time control are detailed.
Abstract: The programmable logic controller (PLC) is changing to reflect the demands of sequencing and continuous processing applications. The PLC can be thought of as a hardened computer with high speed I/O and communications ports with bus interfacing for multiprocessor or coprocessor enhancements, and a BIOS that supports those features. PLC manufacturers now retain control over both aspects of the PLC which interoperate in a proprietary manner. Via the delineation of the two may come a standard real time operating system or real time kernel which would give rise to true applications portability. Specific topics addressed include relay ladder logic, a structured approach to error dynamic diagnostic information (EDDI), state control language, and process control function blocks.
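Relay ladder logic itself is easy to model: each scan reads the input image, evaluates every rung (series contacts AND together, parallel branches OR together; a leading `/` marks a normally-closed contact in this made-up notation), and commits the coil states at end of scan. The classic start/stop seal-in circuit serves as the example:

```python
# Miniature ladder-logic scan cycle. The rung notation is invented for
# this sketch; real PLC programming uses graphical ladder diagrams.

def energized(contact, image):
    """A contact passes power based on the input image; '/' = normally closed."""
    if contact.startswith("/"):
        return not image.get(contact[1:], False)
    return image.get(contact, False)

def scan(rungs, image):
    """One PLC scan: evaluate every rung against the same input image,
    committing all coil writes together at end of scan."""
    out = dict(image)
    for branches, coil in rungs:
        # branches: list of series-contact lists; branches OR, contacts AND
        out[coil] = any(all(energized(c, image) for c in b) for b in branches)
    return out

# Start/stop seal-in: MOTOR = (START OR MOTOR) AND NOT STOP
SEAL_IN = [([["START", "/STOP"], ["MOTOR", "/STOP"]], "MOTOR")]
```

Running successive scans shows the seal-in behaviour: the motor latches on after START is released and drops out only when STOP opens the rung.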

Journal Article
TL;DR: The design, implementation, and performance of a frontal code for the solution of large sparse unsymmetric systems of linear equations is described and the resulting software package, MA42, is included in Release 11 of the Harwell Subroutine Library.
Abstract: We describe the design, implementation, and performance of a frontal code for the solution of large sparse unsymmetric systems of linear equations. The resulting software package, MA42, is included in Release 11 of the Harwell Subroutine Library and is intended to supersede the earlier MA32 package. We discuss in detail design changes from the earlier code, indicating the way in which they aid clarity, maintainability, and portability. The new design also permits extensive use of higher level BLAS kernels, which aid both modularity and efficiency. We illustrate the performance of our new code on practical problems on a CRAY Y-MP, an IBM 3090, and an IBM RISC System/6000. We indicate some directions for future development.

Patent
Marc Sabatella1
19 May 1993
TL;DR: The incremental linker as discussed by the authors uses dynamic linking and loading, where the originally written routines are linked as a dynamically loadable library, and the modified routines are incrementally linked into the program.
Abstract: An incremental linker provides for faster linking and portability to a variety of systems and environments. The incremental linker uses dynamic linking and loading wherein the originally written routines are linked as a dynamically loadable library. Routines that are subsequently modified are linked as a separately loaded program that calls the dynamically loadable library. The separately loaded program is loaded first, so any modified routine already present in the separately loaded program will be used in place of an equivalent unmodified routine that is present in the dynamic link library. In this manner, the modified routine is incrementally linked into the program.
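The load-order trick in the claim can be modeled in a few lines: symbol resolution searches the separately loaded patch program before the original library, so a relinked routine shadows its unmodified version. Names here are illustrative, not taken from the patent.

```python
# Load-order symbol resolution in miniature: modules loaded "first" are
# searched first, so a modified routine in the separately loaded program
# shadows the original in the dynamically loadable library.

class Loader:
    def __init__(self):
        self.search_path = []              # modules, searched front to back

    def load(self, symbols, first=False):
        """Register a module's symbol table, optionally ahead of the rest."""
        if first:
            self.search_path.insert(0, symbols)
        else:
            self.search_path.append(symbols)

    def resolve(self, name):
        """Return the first definition of `name` in load order."""
        for module in self.search_path:
            if name in module:
                return module[name]
        raise NameError(name)
```

Only the changed routines need relinking; every unmodified symbol still resolves into the original library, which is where the speed advantage comes from.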

Journal ArticleDOI
TL;DR: A methodology that works on the notion of reduced dimensionality in the choice of a small set of site-relevant variables is presented and it is contended that this methodology could incorporate model simplicity and site specificity in current estimation models.

Journal ArticleDOI
Michael Franz1
TL;DR: The design of an operating‐system emulator is presented, which provides the services of one operating system on a machine running a different operating system, by mapping the functions of the first onto equivalent calls to the second.
Abstract: In this paper, we present the design of an operating-system emulator. This software interface provides the services of one operating system (Oberon) on a machine running a different operating system (Macintosh), by mapping the functions of the first onto equivalent calls to the second. The construction of this emulator proceeded in four distinct phases, documented here through examples from each of these phases. We believe that our four-phase approach can be beneficial whenever a larger software system needs to be adapted from one architecture onto another. In conclusion, we relate some of the lessons learned and propose guidelines for similar engineering projects.

31 Dec 1993
TL;DR: The design of ScaLAPACK++, an object oriented C++ library for implementing linear algebra computations on distributed memory multicomputers, is described, which will support distributed dense, banded, sparse matrix operations for symmetric, positive-definite, and non-symmetric cases.
Abstract: We describe the design of ScaLAPACK++, an object oriented C++ library for implementing linear algebra computations on distributed memory multicomputers. This package, when complete, will support distributed dense, banded, and sparse matrix operations for symmetric, positive-definite, and non-symmetric cases. In ScaLAPACK++ we have employed object oriented design methods to enhance scalability, portability, flexibility, and ease-of-use. We illustrate some of these points by describing the implementation of a right-looking LU factorization for dense systems in ScaLAPACK++.

Patent
28 Apr 1993
TL;DR: The handy information processor described in this patent protects the screen of its display means when not in use without requiring a dedicated protection cover, and is lightweight, compact, and superior in portability and operability.
Abstract: PURPOSE: To provide a handy information processor whose display screen can be protected when not in use without a dedicated protection cover, and which is lightweight, compact, and superior in portability and operability. CONSTITUTION: In use, a device 21 is mounted on an IJ printer 24 so that the screen of its display 22 is exposed. When the processor is not in use, the device 21 is mounted on the IJ printer 24 so that the screen of the display 22 is covered by, and thereby protected by, the IJ printer 24.

Proceedings ArticleDOI
06 Oct 1993
TL;DR: The paper describes the motivation behind the basic concepts of MPI and very briefly summarizes some of its advanced features and outlines an implementation strategy and describes a preliminary portable implementation.
Abstract: We describe an effort to define a standard message-passing interface. The MPI "standard" has now emerged. The paper describes the motivation behind the basic concepts of MPI and very briefly summarizes some of its advanced features. We also outline an implementation strategy and describe a preliminary portable implementation.
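The point-to-point core of a message-passing interface can be mimicked with per-rank mailboxes; the sketch below echoes the flavor of MPI send/receive between ranks but is emphatically not the MPI API (names and signatures are invented).

```python
import queue
import threading

class Comm:
    """Toy communicator: one mailbox per rank stands in for the transport."""
    def __init__(self, size):
        self.size = size
        self.mailbox = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        self.mailbox[dest].put(data)        # buffered, asynchronous send

    def recv(self, rank):
        return self.mailbox[rank].get()     # blocking receive

def ping_pong():
    """Two 'ranks' exchange a message, each running in its own thread."""
    comm = Comm(2)
    result = {}

    def rank0():
        comm.send("ping", dest=1)
        result["reply"] = comm.recv(rank=0)

    def rank1():
        msg = comm.recv(rank=1)
        comm.send("pong" if msg == "ping" else "?", dest=0)

    threads = [threading.Thread(target=rank0), threading.Thread(target=rank1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result["reply"]
```

The same two-rank program structure, one function per rank communicating only through send/receive, is what makes message-passing code portable across the transport layers the standard abstracts over.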

Journal ArticleDOI
TL;DR: The hardware and software issues of the Delta system are discussed and the system's file server, portability, acceptance tests, mode of operation, and national network connections are described.
Abstract: Since 1991, the California Institute of Technology has operated a massively parallel computer system on behalf of the concurrent supercomputing consortium (CSCC). The computer system is a distributed-memory multiple-instruction multiple-data (MIMD) system the nodes of which are connected in a two-dimensional mesh by mesh-routing chips. The system's file server, portability, acceptance tests, mode of operation, and national network connections are described. The hardware and software issues of the Delta system are discussed.

Book
11 Aug 1993
TL;DR: This handbook should be useful for X and UNIX programmers who want their software to be portable and covers a general explanation of Imake, how to write and debug an Imakefile, and how to write configuration files.
Abstract: Imake is a utility that works with Make to enable code to be compiled and installed on different UNIX machines. This handbook should be useful for X and UNIX programmers who want their software to be portable. The book covers a general explanation of Imake, how to write and debug an Imakefile, and how to write configuration files. Several sample sets of configuration files are described and are available free over the Net.

Proceedings ArticleDOI
14 Oct 1993
TL;DR: A toolbox approach is taken to address the problems of portability and extensibility of software feedback scheduling mechanisms, developing a toolbox of standard, relatively simple components with well-defined performance and functionality characteristics.
Abstract: Fine-grain scheduling based on software feedback was introduced in the Synthesis operating system to solve two problems: the dependency between jobs in a pipeline and the low-latency requirements of multimedia type applications. The performance level achieved and the adaptiveness of applications running on Synthesis demonstrated the success of fine-grain scheduling based on software feedback. However, the Synthesis implementation of software feedback is specialized for that particular architecture and a particular application (pipelined process scheduling). Consequently, despite the proven success of fine-grain scheduling, it is not easy to port it to another operating system or to apply its lessons elsewhere, even within Synthesis. To address the problems of portability and extensibility of software feedback scheduling mechanisms, we have taken a toolbox approach in our current research. Instead of creating a specialized solution for each particular scheduling problem, we are developing a toolbox of standard, relatively simple components with well-defined performance and functionality characteristics. The goal is the ability to quickly implement sophisticated software feedback mechanisms by composing these basic toolbox components. The intended applications are primarily in the adaptive scheduling needed in multimedia and real-time domains, especially when input/output operations introduce a large variance in job completion time.
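One plausible toolbox component of the kind described is an integrating feedback element that nudges a pipeline stage's service rate toward whatever holds its input queue at a set point. The gains, defaults, and names below are invented for illustration.

```python
# A candidate software-feedback component: an integrator that adjusts a
# stage's scheduling rate to hold its input queue at a set point.
# All parameters and names are hypothetical.

class FeedbackRate:
    def __init__(self, rate=1.0, setpoint=4, gain=0.25):
        self.rate = rate            # current scheduling rate (arbitrary units)
        self.setpoint = setpoint    # desired input-queue length
        self.gain = gain            # integrator gain

    def observe(self, queue_len):
        """One feedback step: a long queue speeds the stage up, a short
        one slows it down; the rate never goes negative."""
        self.rate = max(0.0, self.rate + self.gain * (queue_len - self.setpoint))
        return self.rate
```

Composing such elements (filters, integrators, limiters) rather than hand-crafting each scheduler is the essence of the toolbox approach the abstract advocates.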

Journal ArticleDOI
E. Anderson1, Jack Dongarra
01 Aug 1993
TL;DR: The LAPACK project, an effort to produce a numerical linear algebra library that runs efficiently on shared-memory vector and parallel processors, is discussed and results are given for various computers.
Abstract: The LAPACK project, an effort to produce a numerical linear algebra library that runs efficiently on shared-memory vector and parallel processors, is discussed. A description is given of what was done to achieve performance, and results are given for various computers. Future directions for research on parallel computers are also discussed.

Proceedings ArticleDOI
20 Jul 1993
TL;DR: The authors provide two implementations of Linda in an attempt to support a single high-level programming model on top of the existing paradigms in order to provide a consistent semantics regardless of the underlying model.
Abstract: Facilities such as interprocess communication and protection of shared resources have been added to operating systems to support multiprogramming and have since been adapted to exploit explicit multiprocessing within the scope of two models: the shared-memory model and the distributed (message-passing) model. When multiprocessors (or networks of heterogeneous processors) are used for explicit parallelism, the difference between these models is exposed to the programmer. The p4 tool set was originally developed to buffer the programmer from synchronization issues while offering an added advantage in portability; however, two models are often still needed to develop parallel algorithms. The authors provide two implementations of Linda in an attempt to support a single high-level programming model on top of the existing paradigms in order to provide a consistent semantics regardless of the underlying model. Linda's fundamental properties associated with generative communication eliminate the distinction between shared and distributed memory.
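Linda's generative communication is small enough to sketch whole. In this toy tuple space (method names follow Linda's out/in/rd, with `in_` avoiding the Python keyword), producers and consumers never name each other or a memory location, only tuple shapes; `None` acts as a wildcard in templates.

```python
# Minimal tuple space in the Linda style. Non-blocking for simplicity;
# real Linda's in/rd block until a matching tuple exists.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, *template):
        """Read (without removing) the first tuple matching the template."""
        for tup in self.tuples:
            if self._match(template, tup):
                return tup
        return None

    def in_(self, *template):
        """Withdraw the first tuple matching the template."""
        tup = self.rd(*template)
        if tup is not None:
            self.tuples.remove(tup)
        return tup
```

Because a tuple deposited with `out` can be withdrawn by any process on any node, the same four operations work unchanged over shared or distributed memory, which is exactly the property the paper exploits.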


Journal ArticleDOI
TL;DR: The authors have addressed the problem of shared memory architectures and explicitly parallel programs by defining a programming structure that eases the development of effectively portable programs.
Abstract: The tension between software development costs and efficiency is especially high when considering parallel programs intended to run on a variety of architectures. In the domain of shared memory architectures and explicitly parallel programs, the authors have addressed this problem by defining a programming structure that eases the development of effectively portable programs. On each target multiprocessor, an effectively portable program runs almost as efficiently as a program fine-tuned for that machine. Additionally, its software development cost is close to that of a single program that is portable across the targets. Using this model, programs are defined in terms of data structure and partitioning-scheduling abstractions. Low software development cost is attained by writing source programs in terms of abstract interfaces and thereby requiring minimal modification to port; high performance is attained by matching (often dynamically) the interfaces to implementations that are most appropriate to the execution environment. The authors include results of a prototype used to evaluate the benefits and costs of this approach.
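The partitioning-scheduling abstraction can be illustrated with a hypothetical example (class and target names are invented): the program is written against an abstract `split` interface, and porting reduces to binding that interface to the implementation best suited to the target machine.

```python
# "Effectively portable" structure in miniature: one program text, with
# the partitioning abstraction bound per target. Names are hypothetical.

class BlockPartition:
    """Contiguous chunks: suits machines where locality dominates."""
    def split(self, items, nworkers):
        k = -(-len(items) // nworkers)          # ceiling division
        return [items[i * k:(i + 1) * k] for i in range(nworkers)]

class CyclicPartition:
    """Round-robin: suits uniform-access shared memory, balances load."""
    def split(self, items, nworkers):
        return [items[w::nworkers] for w in range(nworkers)]

# Binding of abstract interface to target-appropriate implementation.
PARTITIONERS = {"numa-box": BlockPartition(), "uma-box": CyclicPartition()}

def parallel_sum(items, target, nworkers=2):
    """The portable program: written only against the abstract interface."""
    chunks = PARTITIONERS[target].split(items, nworkers)
    return sum(sum(c) for c in chunks)          # stand-in for worker threads
```

`parallel_sum` itself never changes across targets; only the table entry does, which is the low-porting-cost, high-performance trade the abstract describes.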

11 Jan 1993
TL;DR: This dissertation addresses the problem of facilitating the development of efficiently executing programs for multiple-instruction multi-datastream (MIMD) parallel computers by raising the level of abstraction at which parallel program structures are expressed and moving to a compositional approach to programming.
Abstract: This dissertation addresses the problem of facilitating the development of efficiently executing programs for multiple-instruction multi-datastream (MIMD) parallel computers. It is difficult to write programs which are both correct and efficient even for a single MIMD parallel architecture. A program which is efficient in execution on one member of this architecture class is often either not portable at all to different members of the architecture class, or if portability is possible, the efficiency attained is usually not satisfactory on any architecture. The conceptual basis of the approach we have taken to providing a solution for the problem of programming MIMD parallel architectures is based upon raising the level of abstraction at which parallel program structures are expressed and moving to a compositional approach to programming. The CODE 2.0 model of parallel programming permits parallel programs to be created by composing basic units of computation and defining relationships among them. It expresses the communication and synchronization relationships of units of computation as abstract dependencies. Ready access to these abstractions is provided by a flexible graphical interface in which the user can specify them in terms of extended directed graphs. Both ease of preparation of correct programs and compilation to efficient execution on multiple target architectures is enabled. The compositional approach to programming focuses the programmer's attention upon the structure of the program, rather than development of small unit transformations. In the CODE 2.0 system, the units of computation are prepared using conventional sequential programming languages along with declaratively specified conditions under which the unit is enabled for execution. 
The system is built upon a unique object-oriented model of compilation in which communication and synchronization mechanisms are implemented by parameterized class templates which are used to custom tailor the translation of abstract specifications in communication and synchronization to efficient local models. (Abstract shortened by UMI.)