
Showing papers on "Software portability" published in 1996


Journal ArticleDOI
TL;DR: In this article, the authors discuss their experience designing and implementing a statistical computing language, which combines what they felt were useful features from two existing computer languages, and they feel that the new language provides advantages in the areas of portability, computational efficiency, memory management, and scoping.
Abstract: In this article we discuss our experience designing and implementing a statistical computing language. In developing this new language, we sought to combine what we felt were useful features from two existing computer languages. We feel that the new language provides advantages in the areas of portability, computational efficiency, memory management, and scoping.

9,446 citations


Book
01 Jan 1996
TL;DR: MPI: The Complete Reference is an annotated manual for the latest 1.1 version of the standard that illuminates the more advanced and subtle features of MPI and covers such advanced issues in parallel computing and programming as true portability, deadlock, high-performance message passing, and libraries for distributed and parallel computing.
Abstract: From the Publisher: MPI, the Message Passing Interface, is a standard and portable library of communications subroutines for parallel programming designed to function on a wide variety of parallel computers. It is useful both on parallel computers, such as IBM's SP2, the Cray Research T3D, and the Connection Machine, and on networks of workstations. Written by five of the principal creators of the latest MPI standard, MPI: The Complete Reference is an annotated manual for the latest 1.1 version of the standard that illuminates the more advanced and subtle features of MPI. It can be read in conjunction with the companion tutorial volume, Using MPI: Portable Parallel Programming with the Message-Passing Interface, by William Gropp, Ewing Lusk, and Anthony Skjellum. MPI: The Complete Reference is the only source that covers such advanced issues in parallel computing and programming as true portability, deadlock, high-performance message passing, and libraries for distributed and parallel computing. The annotations provide numerous illustrative programming examples and delve into even the most esoteric features or consequences of the standard. They explain why certain design choices were made, how users should use the interface, and how implementors should construct their own version of MPI. Part of the Scientific and Engineering Computation series.
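Nothing in the book entry above shows what portable MPI code looks like, so here is a minimal sketch of ordinary MPI-1 usage (my illustration, not an example from the book): rank 0 sends one integer to rank 1, and the identical source runs on any machine with an MPI implementation.

```cpp
#include <mpi.h>
#include <cstdio>

// Minimal MPI-1 point-to-point example (illustrative, not from the book).
// The same source compiles and runs unchanged wherever an MPI library
// exists, which is the portability the standard is designed to provide.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload = 0;
            MPI_Status status;
            MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, /*tag=*/0, MPI_COMM_WORLD, &status);
            std::printf("rank 1 received %d\n", payload);
        }
    }

    MPI_Finalize();
    return 0;
}
```

With an implementation such as MPICH (described in the next entry), this would typically be built with the implementation's compiler wrapper and launched with something like `mpirun -np 2 ./a.out`, though exact commands vary by installation.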

2,635 citations


Journal ArticleDOI
01 Sep 1996
TL;DR: The Message Passing Interface (MPI), as described in this paper, is a specification for a standard message-passing library defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists.
Abstract: MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.

2,082 citations


01 Jan 1996
TL;DR: The concepts discussed are appropriate for all scalable computing systems, and the PETSc libraries provide many of the data structures and numerical kernels required for the scalable solution of PDEs, offering performance portability.
Abstract: Parallel numerical software based on the message passing model is enormously complicated. This paper introduces a set of techniques to manage the complexity, while maintaining high efficiency and ease of use. The PETSc 2.0 package uses object-oriented programming to conceal the details of the message passing, without concealing the parallelism, in a high-quality set of numerical software libraries. In fact, the programming model used by PETSc is also the most appropriate for NUMA shared-memory machines, since they require the same careful attention to memory hierarchies as do distributed-memory machines. Thus, the concepts discussed are appropriate for all scalable computing systems. The PETSc libraries provide many of the data structures and numerical kernels required for the scalable solution of PDEs, offering performance portability.
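As a rough illustration of the programming style the abstract describes, the sketch below assembles and solves a small distributed linear system. It uses today's PETSc objects and names (Vec, Mat, KSP), which differ in detail from the PETSc 2.0 interfaces of 1996, so treat it as indicative rather than as code from the paper.

```cpp
// Sketch of the PETSc usage style: objects (Vec, Mat, KSP) hide the message
// passing while still exposing the parallelism through collective creation
// and assembly calls. Names follow the modern PETSc API (assumed), not the
// PETSc 2.0 interfaces described in the paper; error checking is omitted.
#include <petscksp.h>

int main(int argc, char** argv) {
    PetscInitialize(&argc, &argv, nullptr, nullptr);

    const PetscInt n = 100;
    Mat A;
    Vec x, b;
    KSP ksp;

    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);

    // Each process fills only the rows it owns (a simple tridiagonal system).
    PetscInt start, end;
    MatGetOwnershipRange(A, &start, &end);
    for (PetscInt i = start; i < end; ++i) {
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);   // Krylov method and preconditioner chosen at run time
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    MatDestroy(&A);
    VecDestroy(&x);
    VecDestroy(&b);
    PetscFinalize();
    return 0;
}
```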

1,817 citations


Journal ArticleDOI
TL;DR: This work describes an alternative, programmer-centric view of relaxed consistency models that characterizes them in terms of program behavior rather than the system optimizations they support.
Abstract: The memory consistency model of a system affects performance, programmability, and portability. We aim to describe memory consistency models in a way that most computer professionals would understand. This is important if the performance-enhancing features being incorporated by system designers are to be correctly and widely used by programmers. Our focus is consistency models proposed for hardware-based shared memory systems. Most of these models emphasize the system optimizations they support, and we retain this system-centric emphasis. We also describe an alternative, programmer-centric view of relaxed consistency models that describes them in terms of program behavior, not system optimizations.
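A tiny C++ example (my illustration, not from the paper) shows the kind of question a consistency model answers: may the reader observe the flag set and still read a stale data value? On a relaxed model the answer can be yes unless ordering is requested explicitly, which is what the release/acquire annotations below do.

```cpp
#include <atomic>
#include <thread>
#include <cassert>

// Illustration of why a memory consistency model matters (not from the
// paper): under a relaxed model, the write to `data` may become visible
// after the write to `flag` unless ordering is requested explicitly.
std::atomic<bool> flag{false};
int data = 0;

void producer() {
    data = 42;                                      // ordinary write
    flag.store(true, std::memory_order_release);    // publish: orders prior writes
}

void consumer() {
    while (!flag.load(std::memory_order_acquire)) { // acquire pairs with the release
        // spin until the producer publishes
    }
    assert(data == 42);  // guaranteed by release/acquire, not by relaxed ordering
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```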

1,213 citations


Journal ArticleDOI
TL;DR: The key concept of GAs is that they provide a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes.
Abstract: Portability, efficiency, and ease of coding are all important considerations in choosing the programming model for a scalable parallel application. The message-passing programming model is widely used because of its portability, yet some applications are too complex to code in it while also trying to maintain a balanced computation load and avoid redundant computations. The shared-memory programming model simplifies coding, but it is not portable and often provides little control over interprocessor data transfer costs. This paper describes an approach, called Global Arrays (GAs), that combines the better features of both other models, leading to both simple coding and efficient execution. The key concept of GAs is that they provide a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes. We have implemented the GA library on a variety of computer systems, including the Intel Delta and Paragon, the IBM SP-1 and SP-2 (all message passers), the Kendall Square Research KSR-1/2 and the Convex SPP-1200 (nonuniform access shared-memory machines), the CRAY T3D (a globally addressable distributed-memory computer), and networks of UNIX workstations. We discuss the design and implementation of these libraries, report their performance, illustrate the use of GAs in the context of computational chemistry applications, and describe the use of a GA performance visualization tool.
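The one-sided access style described above can be suggested with a short sketch. The calls follow the later C interface of the Global Arrays toolkit as I recall it (NGA_Create, NGA_Get, GA_Sync); the header names, argument orders, and the omitted memory-allocator setup are assumptions, so treat this as indicative rather than authoritative.

```cpp
// Sketch of the Global Arrays style of one-sided access (names follow the
// later GA C interface and are approximate): any process can read or write
// a logical block of a physically distributed matrix without the owning
// processes participating explicitly.
#include <mpi.h>
#include <ga.h>        // Global Arrays toolkit header (assumed)
#include <macdecls.h>  // MA declarations such as C_DBL (assumed)

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    GA_Initialize();   // GA is layered over the message passer
                       // (real programs also initialize the MA allocator; omitted here)

    int dims[2]  = {1000, 1000};
    int chunk[2] = {-1, -1};     // let GA choose the data distribution
    int g_a = NGA_Create(C_DBL, 2, dims, (char*)"A", chunk);
    GA_Zero(g_a);

    // Every process independently fetches a 10x10 block it may not own.
    double buf[10][10];
    int lo[2] = {0, 0}, hi[2] = {9, 9}, ld[1] = {10};
    NGA_Get(g_a, lo, hi, &buf[0][0], ld);   // one-sided get, no receiver-side code

    GA_Sync();                              // barrier before the array is destroyed
    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}
```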

354 citations


Journal ArticleDOI
Gregor Kiczales
TL;DR: This paper provides some ideas to spark further debate on open implementation and on black-box abstraction, a basic tenet of software design underlying approaches to portability and reuse.
Abstract: Encapsulation, informally known as black-box abstraction, is a widely known and accepted principle. It is a basic tenet of software design, underlying approaches to portability and reuse. However, many practitioners find themselves violating it in order to achieve performance requirements in a practical manner. The gap between theory and practice must be filled. Open implementation is a controversial new approach that claims to do just that. The paper provides some ideas to spark further debate on black-box abstraction.
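Open implementation is an approach rather than an API, but a small sketch (entirely my own illustration, not from the paper) conveys the idea: the module keeps its black-box functional interface while exposing a separate meta-interface through which performance-critical clients can steer the implementation strategy without seeing the implementation itself.

```cpp
#include <map>
#include <string>
#include <unordered_map>

// Illustration of the open-implementation idea (not code from the paper):
// the functional interface stays a black box, while a small meta-interface
// (a usage hint) lets clients influence the implementation strategy.
class SymbolTable {
public:
    // Meta-interface: a hint about expected usage, not a change in behaviour.
    enum class Hint { FewEntriesSorted, ManyEntriesUnordered };

    explicit SymbolTable(Hint hint = Hint::ManyEntriesUnordered) : hint_(hint) {}

    // Functional (black-box) interface.
    void insert(const std::string& key, int value) {
        if (hint_ == Hint::FewEntriesSorted) sorted_[key] = value;
        else                                 hashed_[key] = value;
    }

    bool lookup(const std::string& key, int& value) const {
        if (hint_ == Hint::FewEntriesSorted) {
            auto it = sorted_.find(key);
            if (it == sorted_.end()) return false;
            value = it->second;
            return true;
        }
        auto it = hashed_.find(key);
        if (it == hashed_.end()) return false;
        value = it->second;
        return true;
    }

private:
    Hint hint_;
    std::map<std::string, int> sorted_;             // strategy chosen by the hint
    std::unordered_map<std::string, int> hashed_;
};
```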

250 citations


Patent
Carlos Dangelo
10 Jun 1996
TL;DR: An object-oriented, multi-media architecture, as discussed by the authors, provides real-time processing of an incoming stream of pseudo-language byte codes compiled from an object-oriented source program; the architecture includes a plurality of processors arranged for parallel processing.
Abstract: An object-oriented, multi-media architecture provides for real-time processing of an incoming stream of pseudo-language byte codes compiled from an object-oriented source program. The architecture includes a plurality of processors arranged for parallel processing. At least some of the processors are especially adapted or optimized for execution of multi-media methods such as video decompression, inverse discrete cosine transformation, motion estimation and the like. The architecture further includes a virtual machine computer program that reconstructs objects and threads from the byte code stream, and routes each of them to the appropriate hardware resource for parallel processing. This architecture extends the object-oriented paradigm through the operating system and execution hardware of a client machine to provide the advantages of dedicated/parallel processors while preserving portability of the pseudo-language environment.

231 citations


Proceedings ArticleDOI
27 Oct 1996
TL;DR: The Visualization Toolkit (vtk), as described in this paper, is a C++ class library for 3D graphics and visualization; the paper covers its object-oriented models for graphics and visualization, methods for synchronizing system execution, data representation schemes, the role of C++, portability across PC and Unix systems, and how the C++ class library is automatically wrapped with interpreted languages such as Java and Tcl.
Abstract: The Visualization Toolkit (vtk) is a freely available C++ class library for 3D graphics and visualization. We describe core characteristics of the toolkit. This includes a description of object oriented models for graphics and visualization; methods for synchronizing system execution; a summary of data representation schemes; the role of C++; issues in portability across PC and Unix systems; and how we automatically wrap the C++ class library with interpreted languages such as Java and Tcl. We also demonstrate the capabilities of the system for scalar, vector, tensor, and other visualization techniques.
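A short sketch of the pipeline style the toolkit is known for follows; the class and method names below come from later VTK releases (e.g., SetInputConnection), so they are indicative rather than an exact match for the 1996 API described in the paper.

```cpp
// Sketch of the vtk visualization pipeline (class names from later VTK
// releases, assumed): source -> mapper -> actor -> renderer -> window.
#include <vtkSphereSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>

int main() {
    vtkSphereSource* sphere = vtkSphereSource::New();
    sphere->SetThetaResolution(32);
    sphere->SetPhiResolution(32);

    vtkPolyDataMapper* mapper = vtkPolyDataMapper::New();
    mapper->SetInputConnection(sphere->GetOutputPort());  // later-API connection call

    vtkActor* actor = vtkActor::New();
    actor->SetMapper(mapper);

    vtkRenderer* renderer = vtkRenderer::New();
    renderer->AddActor(actor);

    vtkRenderWindow* window = vtkRenderWindow::New();
    window->AddRenderer(renderer);

    vtkRenderWindowInteractor* interactor = vtkRenderWindowInteractor::New();
    interactor->SetRenderWindow(window);

    window->Render();
    interactor->Start();

    // vtk objects are reference-counted via New()/Delete(), not C++ new/delete.
    interactor->Delete(); window->Delete(); renderer->Delete();
    actor->Delete(); mapper->Delete(); sphere->Delete();
    return 0;
}
```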

198 citations


Patent
Michael G. McKenna
21 May 1996
TL;DR: In this article, a system providing improved NLS in application programs is described, which employs normalized Unicode data with generic transformation structures having locale overlays, for effecting various transformation processes using locale-specific information.
Abstract: A system providing improved National Language Support (NLS) in application programs is described. The system employs normalized Unicode data with generic transformation structures having locale overlays. Methods are described for navigating the structures during system operation, for effecting various transformation processes using locale-specific information. The locale-specific information is maintained in the structures as external data files. Since the data files are read in at runtime, the underlying binary files which comprise the program need not be modified for updating the program to support a new locale. The approach provides extensibility to applications with National Language Support. Additionally, increased portability is provided, since manipulation of the underlying data remains unchanged regardless of the underlying platform. Program maintenance is also decreased, since engineers need only maintain a single core.

173 citations


Proceedings ArticleDOI
01 May 1996
TL;DR: To improve the performance of a parallel Haskell program, GUM provides tools for monitoring and visualising the behaviour of threads and of processors during execution.
Abstract: GUM is a portable, parallel implementation of the Haskell functional language. Despite sustained research interest in parallel functional programming, GUM is one of the first such systems to be made publicly available. GUM is message-based, and portability is facilitated by using the PVM communications harness that is available on many multi-processors. As a result, GUM is available for both shared-memory (Sun SPARCserver multiprocessors) and distributed-memory (networks of workstations) architectures. The high message-latency of distributed machines is ameliorated by sending messages asynchronously, and by sending large packets of related data in each message. Initial performance figures demonstrate absolute speedups relative to the best sequential compiler technology. To improve the performance of a parallel Haskell program, GUM provides tools for monitoring and visualising the behaviour of threads and of processors during execution.

Proceedings ArticleDOI
25 Jun 1996
TL;DR: This paper reports on ORCHESTRA, a portable fault injection environment for testing implementations of distributed protocols, based on a simple yet powerful framework called script-driven probing and fault injection for evaluating and validating the fault-tolerance and timing characteristics of distributed protocols.
Abstract: As software for distributed systems becomes more complex, ensuring that a system meets its prescribed specification is a growing challenge that confronts software developers. This is particularly important for distributed applications with strict dependability and timeliness constraints. This paper reports on ORCHESTRA, a portable fault injection environment for testing implementations of distributed protocols. This tool is based on a simple yet powerful framework called script-driven probing and fault injection, for the evaluation and validation of the fault-tolerance and timing characteristics of distributed protocols. The tool, which was initially developed on the Real-Time Mach operating system and later ported to other platforms including Solaris and SunOS, has been used to conduct extensive experiments on several protocol implementations. This paper describes the design and implementation of the fault injection tool focusing on architectural features to support portability, minimizing intrusiveness on target protocols, and explicit support for testing real-time systems. The paper also describes the experimental evaluation of two protocol implementations: a real-time audio-conferencing application on Real-Time Mach, and a distributed group membership service on the Sun Solaris operating system.

Proceedings ArticleDOI
01 May 1996
TL;DR: Omniware uses software fault isolation, a technology developed to provide safe extension code for databases and operating systems, to achieve a unique combination of language-independence and excellent performance.
Abstract: This paper evaluates the design and implementation of Omniware: a safe, efficient, and language-independent system for executing mobile program modules. Previous approaches to implementing mobile code rely on either language semantics or abstract machine interpretation to enforce safety. In the former case, the mobile code system sacrifices universality to gain safety by dictating a particular source language or type system. In the latter case, the mobile code system sacrifices performance to gain safety through abstract machine interpretation. Omniware uses software fault isolation, a technology developed to provide safe extension code for databases and operating systems, to achieve a unique combination of language-independence and excellent performance. Software fault isolation uses only the semantics of the underlying processor to determine whether a mobile code module can corrupt its execution environment. This separation of programming language implementation from program module safety enables our mobile code system to use a radically simplified virtual machine as its basis for portability. We measured the performance of Omniware using a suite of four SPEC92 programs on the Pentium, PowerPC, Mips, and Sparc processor architectures. Including the overhead for enforcing safety on all four processors, OmniVM executed the benchmark programs within 21% of the speed of the optimized, unsafe code produced by the vendor-supplied compiler.
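Software fault isolation, which Omniware builds on, rewrites a module's loads, stores, and jumps so that every target address is forced into the module's own segment. The sketch below is my simplification of that idea at the C++ source level; a real SFI system such as Omniware applies it to machine code when the module is rewritten or loaded.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified illustration of software fault isolation (not Omniware code):
// every address an untrusted module uses is masked into the module's own
// segment, so even a wild pointer cannot touch memory outside the sandbox.
// Assumes log2_size >= 2 so that aligned 32-bit accesses fit the segment.
class Sandbox {
public:
    explicit Sandbox(std::size_t log2_size)
        : segment_(std::size_t{1} << log2_size),
          mask_((std::size_t{1} << log2_size) - 1) {}

    // Guarded store: the offset is masked into range and aligned before use.
    void store32(std::uintptr_t unsafe_offset, std::uint32_t value) {
        std::size_t safe = (static_cast<std::size_t>(unsafe_offset) & mask_) & ~std::size_t{3};
        std::memcpy(&segment_[safe], &value, sizeof(value));
    }

    // Guarded load: same masking discipline as the store path.
    std::uint32_t load32(std::uintptr_t unsafe_offset) const {
        std::size_t safe = (static_cast<std::size_t>(unsafe_offset) & mask_) & ~std::size_t{3};
        std::uint32_t value;
        std::memcpy(&value, &segment_[safe], sizeof(value));
        return value;
    }

private:
    std::vector<std::uint8_t> segment_;  // the module's private data segment
    std::size_t mask_;                   // masks addresses into the segment
};
```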

Proceedings ArticleDOI
01 Sep 1996
TL;DR: This paper examines interpreter performance by measuring and analyzing interpreters from both software and hardware perspectives and shows that interpreter performance is primarily a function of the interpreter itself and is relatively independent of the application being interpreted.
Abstract: Interpreted languages have become increasingly popular due to demands for rapid program development, ease of use, portability, and safety. Beyond the general impression that they are "slow," however, little has been documented about the performance of interpreters as a class of applications. This paper examines interpreter performance by measuring and analyzing interpreters from both software and hardware perspectives. As examples, we measure the MIPSI, Java, Perl, and Tcl interpreters running an array of micro and macro benchmarks on a DEC Alpha platform. Our measurements of these interpreters relate performance to the complexity of the interpreter's virtual machine and demonstrate that native runtime libraries can play a key role in providing good performance. From an architectural perspective, we show that interpreter performance is primarily a function of the interpreter itself and is relatively independent of the application being interpreted. We also demonstrate that high-level interpreters' demands on processor resources are comparable to those of other complex compiled programs, such as gcc. We conclude that interpreters, as a class of applications, do not currently motivate special hardware support for increased performance.
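The paper's observation that performance is primarily a property of the interpreter itself is easy to see from the shape of a classic dispatch loop: every application-level operation pays the same fetch/decode/dispatch overhead. The sketch below is a generic stack-machine core, not one of the interpreters measured in the paper.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Generic stack-machine interpreter core (illustrative only). Every
// application instruction pays for the same fetch/decode/dispatch work,
// which is why interpreter performance is largely a property of the
// interpreter rather than of the program being interpreted.
enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

void run(const std::vector<std::uint8_t>& code) {
    std::vector<std::int64_t> stack;
    std::size_t pc = 0;
    for (;;) {
        std::uint8_t op = code[pc++];            // fetch
        switch (op) {                            // decode + dispatch
            case PUSH: stack.push_back(code[pc++]); break;
            case ADD: { auto b = stack.back(); stack.pop_back(); stack.back() += b; break; }
            case MUL: { auto b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
            case PRINT: std::printf("%lld\n", static_cast<long long>(stack.back())); break;
            case HALT: return;
        }
    }
}

int main() {
    // Computes and prints (2 + 3) * 4.
    run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
    return 0;
}
```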

Proceedings ArticleDOI
24 Jun 1996
TL;DR: The preliminary results suggest that the BSP model can be used to develop efficient and portable programs for a range of machines and applications.
Abstract: The Bulk-Synchronous Parallel (BSP) model was proposed by Valiant as a model for general-purpose parallel computation. The objective of the model is to allow the design of parallel programs that can be executed efficiently on a variety of architectures. While many theoretical arguments in support of the BSP model have been presented, the degree to which the model can be efficiently utilized on existing parallel machines remains unclear. To explore this question, we implemented a small library of BSP functions, called the Green BSP library, on several parallel platforms. We also created a number of parallel applications based on this library. Here, we report on the performance of six of these applications on three different parallel platforms. Our preliminary results suggest that the BSP model can be used to develop efficient and portable programs for a range of machines and applications.
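The superstep structure such libraries expose can be sketched as follows. The calls imitate the style of the Oxford BSPlib interface rather than the Green BSP library itself, whose exact API is not given in the abstract, so the names and signatures here are approximate.

```cpp
// Sketch of a BSP superstep in the style of the Oxford BSPlib interface
// (names and signatures approximate; the Green BSP library in the paper
// has its own API). Each superstep is local computation, then one-sided
// communication, then a global barrier (bsp_sync).
#include <bsp.h>     // BSPlib header (assumed)
#include <cstdio>
#include <vector>

static void spmd_main() {
    bsp_begin(bsp_nprocs());
    int p = bsp_nprocs(), pid = bsp_pid();

    double local = pid + 1.0;                 // stand-in for real local work
    std::vector<double> all(p, 0.0);
    bsp_push_reg(all.data(), p * (int)sizeof(double));
    bsp_sync();                               // registration visible next superstep

    for (int i = 0; i < p; ++i)               // one-sided puts into every process
        bsp_put(i, &local, all.data(), pid * (int)sizeof(double), (int)sizeof(double));
    bsp_sync();                               // superstep boundary: all puts delivered

    double sum = 0.0;
    for (double v : all) sum += v;            // every process now holds all values
    if (pid == 0) std::printf("sum = %g\n", sum);

    bsp_pop_reg(all.data());
    bsp_end();
}

int main(int argc, char** argv) {
    bsp_init(spmd_main, argc, argv);          // start the SPMD section
    spmd_main();
    return 0;
}
```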

Journal ArticleDOI
TL;DR: The Metacomputer Adaptive Runtime System (MARS) is a framework for minimizing the execution time of distributed applications on WAN metacomputers; it uses accumulated statistical data on previous execution runs of the same application to derive an improved task-to-process mapping.


Patent
Peter K. Edberg
10 May 1996
TL;DR: In this paper, the authors present a system and method for organizing information to perform accurate and efficient collation of information such as text in the languages of various nationalities and regions, which provides a number of improvements over existing string comparison routines: portability, improved performance, ability to handle Unicode, and improved linguistic capability.
Abstract: According to the system and method disclosed herein, the present invention provides a system and method for organizing information to perform accurate and efficient collation for information such as languages of various nationalities and regions. This invention provides a number of improvements over the existing string comparison routines: portability, improved performance, ability to handle Unicode, and improved linguistic capability.

Proceedings ArticleDOI
18 Nov 1996
TL;DR: This paper illustrates the performance limitations of existing CORBA implementations in terms of their support for the dynamic invocation interface (DII) and the dynamic skeleton interface (DSI) and indicates that object request broker implementers must optimize both the DII and DSI significantly to be suitable for performance-sensitive applications on high-speed networks.
Abstract: The common object request broker architecture (CORBA) is intended to simplify the task of developing distributed applications. Although it is well-suited for conventional remote procedure call style applications, several limitations become evident when CORBA is used for a broader range of performance-sensitive applications running in heterogeneous environments over high-speed networks. This paper illustrates the performance limitations of existing CORBA implementations in terms of their support for the dynamic invocation interface (DII) and the dynamic skeleton interface (DSI). The results indicate that object request broker implementers must optimize both the DII and DSI significantly before CORBA will be suitable for performance-sensitive applications on high-speed networks. In addition, the CORBA 2.0 DII specification must be clarified in order to ensure application portability and optimal performance.

Proceedings ArticleDOI
01 May 1996
TL;DR: A reference architecture for STEs is proposed and analytical value is demonstrated by using SAAM (Software Architectural Analysis Method) to compare three software test environments: PROTest II (PROLOG Test Environment, Version II), TAOS (Testing with Analysis and Oracle Support), and CITE (CONVEX Integrated Test Environment).
Abstract: Software test environments (STEs) provide a means of automating the test process and integrating testing tools to support required testing capabilities across the test process. Specifically, STEs may support test planning, test management, test measurement, test failure analysis, test development and test execution. The software architecture of an STE describes the allocation of the environment's functions to specific implementation structures. An STE's architecture can facilitate or impede modifications such as changes to processing algorithms, data representation or functionality. Performance and reusability are also subject to architecturally imposed constraints. Evaluation of an STE's architecture can provide insight into modifiability, extensibility, portability and reusability of the STE. This paper proposes a reference architecture for STEs. Its analytical value is demonstrated by using SAAM (Software Architectural Analysis Method) to compare three software test environments: PROTest II (PROLOG Test Environment, Version II), TAOS (Testing with Analysis and Oracle Support), and CITE (CONVEX Integrated Test Environment).

Journal ArticleDOI
TL;DR: The article describes the design, implementation, and evaluation of the software network and application services that support the InfoPad terminal, a low-power, lightweight wireless multimedia terminal that operates in indoor environments and supports a high density of users.
Abstract: Some of the most important trends in computer systems are the emerging use of multimedia Internet services, the popularity of portable computing, and the development of wireless data communications. The primary goal of the InfoPad project is to combine these trends to create a system that provides ubiquitous information access. The system is built around a low-power, lightweight wireless multimedia terminal that operates in indoor environments and supports a high density of users. The InfoPad system uses a number of innovative techniques to provide the high-bandwidth connectivity, portability, and user interface needed for this environment. The article describes the design, implementation, and evaluation of the software network and application services that support the InfoPad terminal. Special applications, type servers, and recognizers are developed for the InfoPad system. This software is designed to take advantage of the multimedia capabilities of the portable terminal and the additional computational resources available on the servers. The InfoNet system provides low-latency, high bandwidth connectivity between the computation and the portable terminal. It also provides the routing and handoff support that allows users to roam freely. The performance measurements of the system show that this design is a viable alternative, especially in the indoor environment.

Journal ArticleDOI
Nick N. Duan
01 May 1996
TL;DR: A methodology using Java and HORB for developing database applications is proposed, with the objective of establishing a robust Web infrastructure in a corporate environment.
Abstract: Most of the Java applets on the Web today are developed primarily for visualization and 3D interactive animation. Serious doubts have been raised about the feasibility of using Java in the domain of enterprise applications. Compared with the conventional CGI-based approach for developing database applications, the Java-based approach provides a high degree of flexibility, scalability, portability and robustness. Through the use of HORB, a software tool based on the concept of Object Request Broker, Java client and server objects can be created easily and accessed transparently. A methodology using Java and HORB for developing database applications is proposed, with the objective of establishing a robust Web infrastructure in a corporate environment.

Patent
31 May 1996
TL;DR: In this paper, a multi-layer down-load protocol for wireless networks is proposed, which includes a number of independent protocol layers, preferably operating a master-slave configuration, each layer controls respective sequence numbers to ensure system integrity.
Abstract: Software is down-loaded from a central station of a wireless telecommunications system to a remote subscriber station for configuring the remote subscriber station to permit wireless connection of user telecommunications equipment at the remote subscriber station to the central station. A multi-layer down-load protocol includes a number of independent protocol layers, preferably operating in a master-slave configuration. Each layer controls respective sequence numbers to ensure system integrity. Control software is arranged with a device-independent boot-strap and a set of device-specific external service parameters to provide portability.

Journal ArticleDOI
01 Dec 1996
TL;DR: It is shown that it is indeed possible to produce fully portable parallel software which will run with highly efficient, scalable and predictable performance on any general purpose parallel architecture.
Abstract: General purpose parallel computing systems come in a variety of forms. We have various kinds of distributed memory architectures, shared memory multiprocessors, and clusters of workstations. New technologies may increase this range still further. Can one hope to design portable and scalable parallel software in the face of such architectural diversity? In this paper we show that it is indeed possible to produce fully portable parallel software which will run with highly efficient, scalable and predictable performance on any general purpose parallel architecture. The approach we describe is based on the bulk synchronous parallel (BSP) model of computation. The BSP model provides a simple, unified framework for the design and programming of all kinds of general purpose parallel systems. Over the last few years, a number of important research activities in algorithms and architectures have been pursued as part of this new approach to scalable parallel computing. In this paper we give some simple BSP algorithms and show how they can be expressed as programs. We also briefly describe some of the BSP programming language developments which are now being pursued.
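The predictability claim rests on the BSP cost model. In its standard textbook form (not quoted from this particular paper), a superstep in which each processor performs at most w local operations and the communication forms an h-relation costs:

```latex
\[
T_{\text{superstep}} = w + g\,h + l,
\qquad
T_{\text{program}} = \sum_{s=1}^{S} \left( w_s + g\,h_s + l \right)
\]
```

where g is the machine's per-word communication throughput parameter and l its barrier synchronization cost; a program's running time can then be predicted on any target machine from its (p, g, l) parameters, which is what makes BSP programs portable with predictable performance.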

Proceedings ArticleDOI
03 Oct 1996
TL;DR: A novel framework of robust speech understanding is presented, based on a detection and verification strategy that extracts the semantically significant parts and rejects the irrelevant parts rather than decoding whole utterances.
Abstract: A novel framework of robust speech understanding is presented. It is based on a detection and verification strategy. It extracts the semantically significant parts and rejects the irrelevant parts rather than decoding whole utterances. There are two key features in the strategy. Firstly, the discriminative verifier is integrated to suppress false alarms. It uses anti-subword models specifically trained to verify the recognition results. The second feature is the use of a key-phrase network as the detection unit. It embeds a stochastic constraint of keyword and key-phrase connections to improve the coverage and detection rates. The automatic generation of the key-phrase network structure is also addressed. This top-down variable-length language model can be trained with a small corpus and ported to different tasks. This property coupled with the vocabulary-independent detector and verifier enhances the portability of the framework.

Proceedings Article
01 Jan 1996
TL;DR: The paper describes the experiences in implementing four parallel programming systems using Panda and it evaluates the performance of the Panda-based implementations.
Abstract: Panda is a virtual machine designed to support portable implementations of parallel programming systems. It provides communication primitives and thread support to higher-level layers (such as a runtime system). We have used Panda to implement four parallel programming systems: Orca, data parallel Orca, PVM, and SR. The paper describes our experiences in implementing these systems using Panda and it evaluates the performance of the Panda-based implementations.

Patent
01 Oct 1996
TL;DR: In this article, the authors proposed a method for updating two-tiered databases in a telecommunications system which support local number portability through call connection information sets stored on the databases. But this method requires the second-tier databases to accept the set as an update, a new set or reject.
Abstract: This invention provides a means and method for updating two-tiered databases in a telecommunications system which supports local number portability through call connection information sets stored on the databases. Pro-active updating is accomplished by tracking location, time and frequency of each switch querying the first tier centralized database for each stored call connection information set. At the time an update is made to a call connection information set at the first tier database, the set is offered to all second tier databases supporting individual switches which have queried the centralized database for that set. Second tier databases accept the set as an update or as a new set, or reject it. Acceptance of a new set or rejection depends on the set achieving a ranking, based on recency and frequency of query, above the threshold for storage. The second tier databases provide confirmation of set acceptance and rejection to the first tier database.

Patent
S. Pun Sherman1
27 Jun 1996
TL;DR: In this article, the authors propose a device driver interface (DDI, 118) for achieving portability of device drivers for operating with full source level compatibility across multiple instruction set architectures and platforms.
Abstract: A device driver interface (DDI, 118) for achieving portability of device drivers (110) for operating with full source level compatibility across multiple instruction set architectures and platforms. The device driver interface (DDI, 118) makes transparent to the driver (110) the actual data access mechanisms of the host computers (108, 130) on which the driver (110) is compiled.

01 Jan 1996
TL;DR: Developing Object-Oriented Multimedia Software by Philipp Ackermann presents the MET++ multimedia application framework, covering object-oriented software engineering, implementation and cross-platform portability, time synchronization, 2D/3D graphics and animation, audio and music, and image and video processing.
Abstract: Table of contents: 1 Introduction; 2 Software Engineering Aspects (object-oriented design, class libraries, design patterns, frameworks); 3 The MET++ Multimedia Application Framework (the ET++ application framework, its portability layer, and multimedia extensions for 3D graphics, audio and music, video, time synchronization, hyperlinks, and file converters); 4 The Time Synchronization Framework; 5 The Graphics and Animation Framework (integrating 2D and 3D graphics, implementation and cross-platform portability, 3D models, animations); 6 The Audio and Music Framework; 7 The Image and Video Framework; 8 Rapid Application Development by Reusing Frameworks (multimedia and hypermedia editors, augmented user interfaces, scientific visualization, visual programming); 9 Summary; with appendices on the C++ programming language and the MET++ software distribution.

Proceedings ArticleDOI
12 Aug 1996
TL;DR: After a study of application codes, it was concluded that by adding a few new techniques to current compilers, automatic parallelization becomes feasible for a range of whole applications.
Abstract: The ability to automatically parallelize standard programming languages results in program portability across a wide range of machine architectures. It is the goal of the Polaris project to develop a new parallelizing compiler that overcomes limitations of current compilers. While current parallelizing compilers may succeed on small kernels, they often fail to extract any meaningful parallelism from whole applications. After a study of application codes, it was concluded that by adding a few new techniques to current compilers, automatic parallelization becomes feasible for a range of whole applications. The techniques needed are interprocedural analysis, scalar and array privatization, symbolic dependence analysis, and advanced induction and reduction recognition and elimination, along with run-time techniques to permit the parallelization of loops with unknown dependence relations.