
Showing papers on "Application software published in 1995"


Journal ArticleDOI
TL;DR: The 4+1 View Model organizes a description of a software architecture using five concurrent views, each of which addresses a specific set of concerns.
Abstract: The 4+1 View Model organizes a description of a software architecture using five concurrent views, each of which addresses a specific set of concerns. Architects capture their design decisions in four views and use the fifth view to illustrate and validate them. The logical view describes the design's object model when an object-oriented design method is used. For a very data-driven application, an alternative approach can be used to develop some other form of logical view, such as an entity-relationship diagram. The process view describes the design's concurrency and synchronization aspects. The physical view describes the mapping of the software onto the hardware and reflects its distributed aspect. The development view describes the software's static organization in its development environment.

2,177 citations


Proceedings ArticleDOI
27 Jun 1995
TL;DR: A model for analyzing software rejuvenation in continuously-running applications is presented; downtime and costs due to downtime during rejuvenation are expressed in terms of the model's parameters, and threshold conditions for rejuvenation to be beneficial are derived.
Abstract: Software rejuvenation is the concept of gracefully terminating an application and immediately restarting it at a clean internal state. In a client-server type of application where the server is intended to run perpetually for providing a service to its clients, rejuvenating the server process periodically during the most idle time of the server increases the availability of that service. In a long-running computation-intensive application, rejuvenating the application periodically and restarting it at a previous checkpoint increases the likelihood of successfully completing the application execution. We present a model for analyzing software rejuvenation in such continuously-running applications and express downtime and costs due to downtime during rejuvenation in terms of the parameters in that model. Threshold conditions for rejuvenation to be beneficial are also derived. We implemented a reusable module to perform software rejuvenation. That module can be embedded in any existing application on a UNIX platform with minimal effort. Experiences with software rejuvenation in a billing data collection subsystem of a telecommunications operations system and other continuously-running systems and scientific applications in AT&T are described.
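
To make the mechanism concrete, here is a minimal sketch (not the paper's actual reusable module) of a UNIX rejuvenation wrapper: it starts a server, lets it run for one rejuvenation interval, then gracefully terminates and immediately restarts it at a clean internal state. The server path and interval are hypothetical.

```c
/* Minimal sketch of periodic software rejuvenation on UNIX.
 * The server path and interval are placeholders; a production module
 * would schedule rejuvenation for the server's most idle period. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define REJUV_INTERVAL 3600           /* rejuvenate once an hour */

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {
            /* Child: start the server at a clean internal state. */
            execl("./server", "server", (char *)NULL);
            perror("execl");
            _exit(127);
        }
        /* Parent: let the server run for one interval, then terminate
         * it gracefully and loop around to restart it. */
        sleep(REJUV_INTERVAL);
        kill(pid, SIGTERM);
        waitpid(pid, NULL, 0);
    }
}
```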

936 citations


Proceedings ArticleDOI
23 Apr 1995
TL;DR: A novel architectural style directed at supporting larger-grain reuse and coherent system composition is presented; the style also supports the design of distributed, concurrent applications.
Abstract: While a large fraction of application code is devoted to graphical user interface (GUI) functions, support for reuse in this domain has largely been confined to the creation of GUI toolkits ("widgets"). We present a novel architectural style directed at supporting larger grain reuse and flexible system composition. Moreover, the style supports design of distributed, concurrent applications. Asynchronous notification messages and asynchronous request messages are the sole basis for intercomponent communication. A key aspect of the style is that components are not built with any dependencies on what typically would be considered lower-level components, such as user interface toolkits. Indeed, all components are oblivious to the existence of any components to which notification messages are sent. While our focus has been on applications involving graphical user interfaces, the style has the potential for broader applicability. Several trial applications using the style are described.
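
As a sketch of the communication rule at the heart of the style (all names invented; the published style also prescribes layering and connector substrates not shown here): components never invoke one another directly, they exchange only asynchronous messages through queues, and a component emitting a notification never names its recipients.

```c
/* Sketch: components communicate solely through asynchronous messages.
 * All names are invented; this fragment omits the style's layering
 * and connector rules. */
#include <stdio.h>

typedef struct { char kind[16]; char payload[64]; } Message;

#define QCAP 32
typedef struct { Message q[QCAP]; int head, tail; } Queue;

/* Asynchronous send: the caller never blocks on, or names, a receiver. */
static void post(Queue *q, Message m) { q->q[q->tail++ % QCAP] = m; }

/* Non-blocking receive: a component drains its inbox when scheduled. */
static int poll_msg(Queue *q, Message *out) {
    if (q->head == q->tail) return 0;
    *out = q->q[q->head++ % QCAP];
    return 1;
}

int main(void) {
    Queue model_out = {0}, ui_in = {0};
    /* The model announces a state change; it is oblivious to the UI. */
    post(&model_out, (Message){ "notify", "value-changed" });
    /* A routing layer, not the model, forwards notifications upward. */
    Message m;
    while (poll_msg(&model_out, &m)) post(&ui_in, m);
    /* The UI component reacts to whatever notifications arrive. */
    while (poll_msg(&ui_in, &m)) printf("ui got %s: %s\n", m.kind, m.payload);
    return 0;
}
```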

563 citations


Journal ArticleDOI
TL;DR: The methodology and guidelines for the design of flexible software-based fault and error injection are described, and a tool, FERRARI, that incorporates the techniques is presented; experimental results demonstrate the effectiveness of the software-based error injection tool in evaluating the dependability properties of complex systems.
Abstract: A major step toward the development of fault-tolerant computer systems is the validation of the dependability properties of these systems. Fault/error injection has been recognized as a powerful approach to validate the fault tolerance mechanisms of a system and to obtain statistics on parameters such as coverages and latencies. This paper describes the methodology and guidelines for the design of flexible software-based fault and error injection and presents a tool, FERRARI, that incorporates the techniques. The techniques used to emulate transient errors and permanent faults in software are described in detail. Experimental results are presented for several error detection techniques, and they demonstrate the effectiveness of the software-based error injection tool in evaluating the dependability properties of complex systems.
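
FERRARI's actual injectors corrupt registers, memory, and instructions of a separately traced process; the sketch below (all names invented) only illustrates the core loop of software-based error injection: flip a bit to emulate a transient error, run a detection mechanism, and tally coverage.

```c
/* Sketch of software-based transient-error injection: flip one bit of
 * application state, run an error-detection mechanism (here a simple
 * XOR checksum, which catches every single-bit flip), and tally the
 * detection coverage over many trials. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint8_t checksum(const uint8_t *buf, size_t n) {
    uint8_t s = 0;
    for (size_t i = 0; i < n; i++) s ^= buf[i];
    return s;
}

int main(void) {
    uint8_t data[256];
    for (int i = 0; i < 256; i++) data[i] = (uint8_t)i;
    uint8_t golden = checksum(data, sizeof data);

    srand(42);
    int trials = 1000, detected = 0;
    for (int t = 0; t < trials; t++) {
        size_t byte = (size_t)(rand() % 256);
        int bit = rand() % 8;
        data[byte] ^= (uint8_t)(1u << bit);   /* inject a transient error */
        if (checksum(data, sizeof data) != golden) detected++;
        data[byte] ^= (uint8_t)(1u << bit);   /* undo for the next trial */
    }
    printf("coverage: %d/%d injections detected\n", detected, trials);
    return 0;
}
```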

370 citations


Proceedings ArticleDOI
23 Apr 1995
TL;DR: A survey of a variety of software systems used in industrial applications, conducted under the premise that software architecture is concerned with capturing the structures of a system and the relationships among the elements both within and between structures, found that these structures fall into several broad categories: conceptual architecture, module interconnection architecture, code architecture, and execution architecture.
Abstract: To help us identify and focus on pragmatic and concrete issues related to the role of software architecture in large systems, we conducted a survey of a variety of software systems used in industrial applications. Our premise, which guided the examination of these systems, was that software architecture is concerned with capturing the structures of a system and the relationships among the elements both within and between structures. The structures we found fell into several broad categories: conceptual architecture, module interconnection architecture, code architecture, and execution architecture. These categories address different engineering concerns. The separation of such concerns, combined with specialized implementation techniques, decreased the complexity of implementation, and improved reuse and reconfiguration.

285 citations


Book
01 Jul 1995
TL;DR: In About Face, respected software designer Alan Cooper shares his real-world experience and design principles so that readers, too, can fashion intuitive, effective user interfaces.
Abstract: From the Publisher: The cleverest code in the world is worth nothing if a program's interface proves an unwieldy barrier to users. That's why programmers and designers alike will benefit from About Face: The Essentials of User Interface Design. Here, respected software designer Alan Cooper shares his own real-world experience and design principles so that you, too, can fashion intuitive, effective user interfaces. Applicable to multimedia and Web sites as well as application software, About Face is an invaluable resource for design professionals.

271 citations


Proceedings ArticleDOI
05 Apr 1995
TL;DR: Details on the design, architectural features and applications of a real-time digital simulator (RTDS) developed at the Manitoba HVDC Research Centre (Winnipeg, Canada) are presented.
Abstract: This paper presents details on the design, architectural features and applications of a real-time digital simulator (RTDS) developed at the Manitoba HVDC Research Centre (Winnipeg, Canada). Custom hardware and software have been developed and collectively applied to the simulation and study of electromagnetic transient phenomena in power systems in real-time. The combination of real-time operation, flexible I/O, graphical user interface and an extensive library of accurate power system component models make the RTDS an ideal simulation tool with a wide range of applications. I. INTRODUCTION: Simulation has long been recognized as an important and necessary step in the development, design and testing of power generation and transmission systems. A wide variety of both analogue and digital simulation tools are available and typically used during various stages of system development. Recent advances in both computing hardware and sophisticated power system component modelling techniques have significantly increased the application of digital simulation in the power system industry. Of particular interest in the context of this paper are the advances made in the study of electromagnetic transient phenomena.

233 citations


Journal ArticleDOI
TL;DR: A pilot study is described that used virtual reality graded exposure techniques to treat acrophobia, the fear of heights; the extent to which subjects felt that they were actually present in height situations is also addressed.
Abstract: Can virtual environments help elicit fearful feelings so they can be treated? This article shows how therapists and computer experts used them to do just that. We describe a pilot study that used virtual reality graded exposure techniques to treat acrophobia, the fear of heights. We specifically address two issues: the extent to which we were able to make subjects feel that they were actually present in height situations, and the efficacy of the treatment conducted using virtual height situations.

230 citations


Proceedings ArticleDOI
08 Dec 1995
TL;DR: This paper describes the input-output requirements of three scalable parallel applications (electron scattering, terrain rendering, and quantum chemistry, on the Intel Paragon XP/S) and describes the broad spectrum of access patterns, including highly read-intensive and write-intensive phases, extremely large and extremely small request sizes, and both sequential and highly irregular access patterns.
Abstract: Rapid increases in computing and communication performance are exacerbating the long-standing problem of performance-limited input/output. Indeed, for many otherwise scalable parallel applications, input/output is emerging as a major performance bottleneck. The design of scalable input/output systems depends critically on the input/output requirements and access patterns for this emerging class of large-scale parallel applications. However, hard data on the behavior of such applications is only now becoming available. In this paper, we describe the input/output requirements of three scalable parallel applications (electron scattering, terrain rendering, and quantum chemistry) on the Intel Paragon XP/S. As part of an ongoing parallel input/output characterization effort, we used instrumented versions of the application codes to capture and analyze input/output volume, request size distributions, and temporal request structure. Because complete traces of individual application input/output requests were captured, in-depth, off-line analyses were possible. In addition, we conducted informal interviews of the application developers to understand the relation between the codes' current and desired input/output structure. The results of our studies show a wide variety of temporal and spatial access patterns, including highly read-intensive and write-intensive phases, extremely large and extremely small request sizes, and both sequential and highly irregular access patterns. We conclude with a discussion of the broad spectrum of access patterns and their profound implications for parallel file caching and prefetching schemes.

198 citations


Journal ArticleDOI
TL;DR: The authors explored the utility of custom computing machinery for accelerating the development, testing, and prototyping of a diverse set of image processing applications and developed a real time image processing system called VTSplash, based on the Splash-2 general-purpose platform.
Abstract: The authors explore the utility of custom computing machinery for accelerating the development, testing, and prototyping of a diverse set of image processing applications. We chose an experimental custom computing platform called Splash-2 to investigate this approach to prototyping real time image processing designs. Custom computing platforms are emerging as a class of computers that can provide near application specific computational performance. We developed a real time image processing system called VTSplash, based on the Splash-2 general-purpose platform. Splash-2 is an attached processor featuring programmable processing elements (PEs) and communication paths. The Splash-2 system uses arrays of RAM based field programmable gate arrays (FPGAs), crossbar networks, and distributed memory to accomplish the needed flexibility and performance tasks. Such platforms let designers customize specific operations for function and size, and data paths for individual applications.

156 citations


Journal ArticleDOI
TL;DR: The general technical concepts underlying compound documents and component software, which promise to simplify the design and implementation of complex software applications and, equally important, simplify human-computer interactive work models for application end users are examined.
Abstract: Component software benefits include reusability and interoperability, among others. What are the similarities and differences between the competing standards for this new technology, and how will they interoperate? Object-oriented technology is steadily gaining acceptance for commercial and custom application development through programming languages such as C++ and Smalltalk, object-oriented CASE tools, databases, and operating systems such as NeXT Computer's NextStep. Two emerging technologies, called compound documents and component software, will likely accelerate the spread of object-oriented concepts across system-level services, development tools, and application-level behaviors. Tied closely to the popular client/server architecture for distributed computing, compound documents and component software define object-based models that facilitate interactions between independent programs. These new approaches promise to simplify the design and implementation of complex software applications and, equally important, simplify human-computer interactive work models for application end users. Following unfortunate tradition, major software vendors have developed competing standards to support and drive compound document and component software technologies. These incompatible standards specify distinct object models, data storage models, and application interaction protocols. The incompatibilities have generated confusion in the market, as independent software vendors, system integrators, in-house developers, and end users struggle to sort out the standards' relative merits, weaknesses, and chances for commercial success. Let's take a look now at the general technical concepts underlying compound documents and component software. Then we examine the OpenDoc, OLE 2, COM, and CORBA standards being proposed for these two technologies. Finally, we'll review the work being done to extend the standards and to achieve interoperability across them.

Patent
12 Oct 1995
TL;DR: A computerized, multimedia tutorial interface system and method for training a user to use computer application software is presented; the system incorporates the training techniques of video segments, on-line tutorials, written instruction, and learning-by-doing lessons.
Abstract: A computerized, multimedia tutorial interface system (10) and method for training a user to use computer application software. The system incorporates the training techniques of video segments, on-line tutorials, written instruction, and learning-by-doing lessons. The system and method incorporate the video segments into the system so that they may be displayed on a computer screen (26). User input is given by way of a mouse (22), keyboard (30), or by voice through an audio interface (34). Once the video clip is displayed on a video window (55), the system preferably runs a set of instructions within the computer application software to demonstrate the exact sequence of instructions that were discussed in the video clip. Once this is completed, written instruction is provided and the user is then given an opportunity to execute the same functions as previously described and executed by the system. In this fashion, lesson content is multiply reinforced. The system may also include user monitoring to ensure that the user correctly enters the instructions as well as to monitor the progress the user is making in his or her training. Preferred applications of the system and method of the present invention include application software, on-line services, and other complicated computer software systems.

Proceedings ArticleDOI
21 Nov 1995
TL;DR: In this paper, the authors present details on the design, architectural features and applications of a real-time digital simulator (RTDS) developed at the Manitoba HVDC Research Centre (Winnipeg, Canada).
Abstract: This paper presents details on the design, architectural features and applications of a real-time digital simulator (RTDS) developed at the Manitoba HVDC Research Centre (Winnipeg, Canada). Custom hardware and software have been developed and collectively applied to the simulation and study of electromagnetic transient phenomena in power systems in real-time. The combination of real-time operation, flexible I/O, graphical user interface and an extensive library of accurate power system component models make the RTDS an ideal simulation tool with a wide range of applications.

Proceedings ArticleDOI
04 May 1995
TL;DR: These investigations are being conducted in the context of the Rialto operating system, an object-based real-time kernel and programming environment currently being developed within Microsoft Research, whose abstractions allow multiple independent real-time programs to dynamically coexist and share resources on the same hardware platforms.
Abstract: This paper describes ongoing investigations into algorithms for modular distributed real-time resource management. These investigations are being conducted in the context of the Rialto operating system, an object-based real-time kernel and programming environment currently being developed within Microsoft Research. Some of the goals of this research include developing appropriate real-time programming abstractions to allow multiple independent real-time programs to dynamically coexist and share resources on the same hardware platforms. Use of these abstractions is intended both to allow individual applications to reason about their own resource requirements and for per-machine system resource planner applications to reason about and control resource allocations between potentially competing applications. The set of resources being managed is dynamically extensible, and may include remote resources in distributed environments. The local planner conducts resource negotiations with individual applications on behalf of the user, with the goal of maximizing the user's perceived utility of the set of running applications with respect to resource allocations for those applications.

Journal ArticleDOI
TL;DR: This paper focuses on three novel aspects in the design and implementation of CCL: the introduction of process groups, the definition of semantics that ensures correctness, and the design of new and tunable algorithms based on a realistic point-to-point communication model.
Abstract: A collective communication library for parallel computers includes frequently used operations such as broadcast, reduce, scatter, gather, concatenate, synchronize, and shift. Such a library provides users with a convenient programming interface, efficient communication operations, and the advantage of portability. A library of this nature, the Collective Communication Library (CCL), intended for the line of scalable parallel computer products by IBM, has been designed. CCL is part of the parallel application programming interface of the recently announced IBM 9076 Scalable POWERparallel System 1 (SP1). In this paper, we examine several issues related to the functionality, correctness, and performance of a portable collective communication library while focusing on three novel aspects in the design and implementation of CCL: 1) the introduction of process groups, 2) the definition of semantics that ensures correctness, and 3) the design of new and tunable algorithms based on a realistic point-to-point communication model.
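
CCL's interface is not reproduced here, but the third design aspect can be illustrated: below is a sketch of a broadcast assembled from point-to-point messages as a binomial tree, the kind of algorithm a collective library selects and tunes against a point-to-point cost model (MPI point-to-point calls are used purely for concreteness; CCL's actual API differs).

```c
/* Sketch: broadcast built from point-to-point messages as a binomial
 * tree. Each round doubles the set of ranks holding the data, giving
 * ceil(log2(P)) communication steps. */
#include <mpi.h>

void tree_bcast(void *buf, int count, MPI_Datatype type,
                int root, MPI_Comm comm)
{
    int rank, size, mask;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int rel = (rank - root + size) % size;   /* rank relative to root */

    /* Non-root ranks first receive the data from their tree parent. */
    for (mask = 1; mask < size; mask <<= 1) {
        if (rel & mask) {
            int src = (rel - mask + root) % size;
            MPI_Recv(buf, count, type, src, 0, comm, MPI_STATUS_IGNORE);
            break;
        }
    }
    /* Then every holder forwards to its children in later rounds. */
    for (mask >>= 1; mask > 0; mask >>= 1) {
        if (rel + mask < size) {
            int dst = (rel + mask + root) % size;
            MPI_Send(buf, count, type, dst, 0, comm);
        }
    }
}
```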

Proceedings ArticleDOI
01 Jan 1995
TL;DR: A software generation methodology that takes advantage of the very restricted class of specifications and allows for tight control over the implementation cost, and exploits several techniques from the domain of Boolean function optimization is proposed.
Abstract: Software components for embedded reactive real-time applications must satisfy tight code size and run-time constraints. Cooperating Finite State Machines provide a convenient intermediate format for embedded system co-synthesis, between high-level specification languages and software or hardware implementations. We propose a software generation methodology that takes advantage of the very restricted class of specifications and allows for tight control over the implementation cost. The methodology exploits several techniques from the domain of Boolean function optimization. We also describe how the simplified control/data-flow graph used as an intermediate representation can be used to accurately estimate the size and timing cost of the final executable code.
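
To illustrate the flavor of software synthesized from a cooperating-FSM specification, here is the shape of C code such a flow might emit for an invented two-state reactive controller; a real generator would additionally optimize the transition logic as Boolean functions to meet code size and run-time budgets.

```c
/* Sketch of the C code an FSM-based generator might emit for an
 * invented two-state reactive controller. */
#include <stdio.h>

typedef enum { S_IDLE, S_ACTIVE } state_t;
typedef struct { int start, stop; } inputs_t;
typedef struct { int motor_on; } outputs_t;

/* One reaction: compute outputs and the next state from the inputs. */
static state_t step(state_t s, inputs_t in, outputs_t *out) {
    switch (s) {
    case S_IDLE:   out->motor_on = 0; return in.start ? S_ACTIVE : S_IDLE;
    case S_ACTIVE: out->motor_on = 1; return in.stop  ? S_IDLE   : S_ACTIVE;
    }
    return s;
}

int main(void) {
    state_t s = S_IDLE;
    outputs_t out = {0};
    inputs_t trace[] = { {1, 0}, {0, 0}, {0, 1} };   /* start, hold, stop */
    for (unsigned i = 0; i < 3; i++) {
        s = step(s, trace[i], &out);
        printf("step %u: motor_on=%d\n", i, out.motor_on);
    }
    return 0;
}
```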

Journal ArticleDOI
01 May 1995
TL;DR: The dynamic flowgraph methodology (DFM) is an integrated methodological approach to modeling and analyzing the behavior of software-driven embedded systems for the purpose of reliability/safety assessment and verification.
Abstract: The dynamic flowgraph methodology (DFM) is an integrated methodological approach to modeling and analyzing the behavior of software-driven embedded systems for the purpose of reliability/safety assessment and verification. The methodology has two fundamental goals: (1) to identify how certain postulated events may occur in a system; and (2) to identify an appropriate testing strategy based on an analysis of system functional behavior. To achieve these goals, the methodology employs a modeling framework in which system models are developed in terms of causal relationships between physical variables and temporal characteristics of the execution of software modules. These models are then analyzed to determine how a certain state (desirable or undesirable) can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating system parameters at different points in time. The prime implicants (multi-state analogue of minimal cut sets) of the fault trees can be used to identify and eliminate system faults resulting from unanticipated combinations of software logic errors, hardware failures and adverse environmental conditions, and to direct testing activity to more efficiently eliminate implementation errors by focusing on the neighborhood of potential failure modes arising from these combinations of system conditions.

01 Jan 1995
TL;DR: Three reusable software components that provide up to the third level of software fault tolerance in the application layer are described; they have been ported to a number of UNIX platforms and can be used in any application with minimal programming effort.
Abstract: By software fault tolerance in the application layer, we mean a set of application-level software components to detect and recover from faults that are not handled in the hardware or operating system layers of a computer system. We consider those faults that cause an application process to crash or hang; they include application software faults as well as faults in the underlying hardware and operating system layers if they are undetected in those layers. We define four levels of software fault tolerance based on availability and data consistency of an application in the presence of such faults. We describe three reusable software components that provide up to the third level of software fault tolerance. Those components perform automatic detection and restart of failed processes, periodic checkpointing and recovery of critical volatile data, and replication and synchronization of persistent data in an application software system. These components have been ported to a number of UNIX platforms and can be used in any application with minimal programming effort. Some telecommunications products in AT&T have already been enhanced for fault-tolerance capability using these three components. Experience with those products to date indicates that these modules provide efficient and economical means to increase the level of fault tolerance in a software system. The performance overhead due to these components depends on the level and varies from 0.1% to 14% based on the amount of critical data being checkpointed and replicated.
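
A minimal sketch of the behavior of the first two components, automatic restart and checkpointing of critical volatile data (the file name, rates, and fault model are invented; the paper's modules are far more general). The watchdog runs until interrupted, restarting the worker whenever it dies, and each new incarnation resumes from the last checkpoint.

```c
/* Sketch: a watchdog that detects and restarts a crashed worker, plus
 * periodic checkpointing and recovery of critical volatile data. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define CKPT_FILE "state.ckpt"       /* invented checkpoint location */

static long restore(void) {          /* recover critical volatile data */
    FILE *f = fopen(CKPT_FILE, "r");
    long v = 0;
    if (f) { fscanf(f, "%ld", &v); fclose(f); }
    return v;
}

static void checkpoint(long v) {     /* persist critical volatile data */
    FILE *f = fopen(CKPT_FILE, "w");
    if (f) { fprintf(f, "%ld\n", v); fclose(f); }
}

static void worker(void) {
    long count = restore();          /* resume from the last checkpoint */
    srand((unsigned)getpid());       /* each incarnation differs */
    for (;;) {
        count++;                     /* the "critical" application state */
        if (count % 10 == 0) checkpoint(count);
        if (rand() % 1000 == 0) abort();  /* emulate a transient crash */
        usleep(10000);
    }
}

int main(void) {                     /* watchdog: detect and restart */
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) { worker(); _exit(0); }
        int status;
        waitpid(pid, &status, 0);
        fprintf(stderr, "worker died (status %d); restarting\n", status);
    }
}
```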

Proceedings ArticleDOI
06 Nov 1995
TL;DR: The Multigraph Architecture (MGA) is a meta-level architecture which includes tools and methods to create domain-specific, model-integrated program synthesis environments that support the integrated modeling of systems independently from their implementation.
Abstract: The design, implementation and deployment of computer applications tightly integrated with complex, changing environments is a difficult task. This paper presents the Multigraph Architecture (MGA) developed for building complex embedded systems. The MGA is a meta-level architecture which includes tools and methods to create domain-specific, model-integrated program synthesis environments. These environments support the integrated modeling of systems independently from their implementation, and include tools for model analysis and application-specific model interpreters for the synthesis of executable programs.

Patent
16 Jun 1995
TL;DR: Translation software is presented that provides remote access to an application program executing on a host machine in its native operating system environment; by monitoring and converting messages, the translation software allows the application program to be displayed remotely.
Abstract: The present invention is directed toward translation software that provides remote access to an application program that is executing on a host machine in its native operating system environment. The translation software monitors messages that are relayed from the application program to an application interface that is provided via the native operating system. Upon recognizing a message that affects a graphical user interface of the native operating system, the translation software converts the message into a protocol that is recognized by a remote graphical user interface. By monitoring and converting messages in this fashion, the translation software allows the application program to be displayed remotely.

Journal ArticleDOI
01 Mar 1995
TL;DR: The motivations for fuzzy logic control (FLC) are illuminated by exploring the benefits obtained by application designers through its use, and application preconditions for obtaining each benefit are stated.
Abstract: The motivations for fuzzy logic control (FLC) are illuminated by exploring the benefits obtained by application designers through its use. A context for this exploration is set with a discussion of the characteristics of control policies and of the general attributes of FLC. Each benefit is described by reference to reported FLC implementations in which the benefit is demonstrated. Based on common features of the example applications, application preconditions for obtaining each benefit are stated.
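
For readers unfamiliar with the mechanics behind these benefits, the following sketch shows one evaluation step of a single-input fuzzy controller: fuzzification with triangular membership functions, a three-rule base, and weighted-average defuzzification. The rules, ranges, and scaling are invented for illustration.

```c
/* Sketch of one FLC evaluation step: fuzzify an error signal with
 * triangular membership functions, apply a three-rule base, and
 * defuzzify by a membership-weighted average. */
#include <stdio.h>

/* Triangular membership function with feet at a, c and peak at b. */
static double tri(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
}

static double flc(double err) {
    /* Fuzzify: degrees of membership in negative / zero / positive. */
    double neg  = tri(err, -2.0, -1.0,  0.0);
    double zero = tri(err, -1.0,  0.0,  1.0);
    double pos  = tri(err,  0.0,  1.0,  2.0);
    /* Rule consequents (singleton outputs): push against the error. */
    double u_neg = 1.0, u_zero = 0.0, u_pos = -1.0;
    double w = neg + zero + pos;
    if (w == 0.0) return 0.0;
    /* Defuzzify: membership-weighted average of the consequents. */
    return (neg * u_neg + zero * u_zero + pos * u_pos) / w;
}

int main(void) {
    for (int i = -3; i <= 3; i++) {
        double e = i * 0.5;
        printf("error %+.2f -> control %+.2f\n", e, flc(e));
    }
    return 0;
}
```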

Proceedings ArticleDOI
13 Sep 1995
TL;DR: The interface synthesis approach describes the basic transformations needed to transform the server interface description into an interface description on the client side of the communication medium.
Abstract: Presents a novel interface synthesis approach based on a one-sided interface description. Whereas most other approaches consider interface synthesis as optimizing a channel to existing client/server modules, we consider the interface synthesis as part of the client/server module synthesis (which may contain the re-use of existing modules). The interface synthesis approach describes the basic transformations needed to transform the server interface description into an interface description on the client side of the communication medium. The synthesis approach is illustrated through a point-to-point communication, but is applicable to synthesis of a multiple client/server environment. The interface description is based on a formalization of communication events.

Proceedings ArticleDOI
23 Apr 1995
TL;DR: It is argued that the evaluation of a method must take into account not only academic concerns, but also the maturity of the method, its compatibility with the existing software development process and system execution environment, and its suitability for the chosen application domain.
Abstract: Numerous formal specification methods for reactive systems have been proposed in the literature. Because the significant differences between the methods are hard to determine, choosing the best method for a particular application can be difficult. We have applied several different methods, including Modechart, VFSM, ESTEREL, Basic LOTOS, Z, SDL and C, to an application problem encountered in the design of software for AT&T's 5ESS® telephone switching system. We have developed a set of criteria for evaluating and comparing the different specification methods. We argue that the evaluation of a method must take into account not only academic concerns, but also the maturity of the method, its compatibility with the existing software development process and system execution environment, and its suitability for the chosen application domain.

Proceedings ArticleDOI
20 Sep 1995
TL;DR: It is concluded that persistent connection is a convenient communication abstraction for reliable, adaptable, and reconfigurable applications.
Abstract: This paper describes a mechanism, called "persistent connection", to preserve stream connections after the communicating peer exits and until it restarts. Such connections have many applications: to survive failures that crash one party, network partitions that cut off the two parties, and temporary disconnection in a mobile computing environment. They can also facilitate suspension of process execution in a limited-resource environment and maintain connectivity when one party migrates from one machine to another. Persistent connection uses logical endpoints to hide disconnection from applications and to achieve location independence. It can be constructed from the normal "transient" connection that goes down with processes. Prototypes have been developed on Unix to provide persistent connections at both the TCP-socket level and the DCE RPC level. Many existing programs can benefit from this software to achieve transparency to disconnection and relocation. We conclude that persistent connection is a convenient communication abstraction for reliable, adaptable, and reconfigurable applications.
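
Purely as a sketch of the abstraction such a mechanism provides (the paper's implementation sits below the socket interface and also preserves stream semantics across the gap, which this fragment does not), here is a client-side wrapper that hides disconnection by redialing a logical endpoint before each retry. The host, port, and retry policy are invented.

```c
/* Sketch: writes go to a logical endpoint; on failure the wrapper
 * re-establishes the transport connection and retries. */
#include <arpa/inet.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int dial(const char *ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    inet_pton(AF_INET, ip, &a.sin_addr);
    if (connect(fd, (struct sockaddr *)&a, sizeof a) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Write through a "persistent" endpoint: on failure, redial and retry. */
static int persistent_write(int *fd, const char *ip, int port,
                            const void *buf, size_t len) {
    for (int attempt = 0; attempt < 5; attempt++) {
        if (*fd < 0) *fd = dial(ip, port);
        if (*fd >= 0 && write(*fd, buf, len) == (ssize_t)len) return 0;
        if (*fd >= 0) { close(*fd); *fd = -1; }  /* peer gone; retry */
        sleep(1);                                /* wait for restart */
    }
    return -1;
}

int main(void) {
    signal(SIGPIPE, SIG_IGN);        /* treat broken pipes as errors */
    int fd = -1;
    const char *msg = "hello\n";
    if (persistent_write(&fd, "127.0.0.1", 7777, msg, strlen(msg)) < 0)
        fprintf(stderr, "endpoint unavailable\n");
    if (fd >= 0) close(fd);
    return 0;
}
```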

Proceedings ArticleDOI
13 Sep 1995
TL;DR: A gradient-search algorithm is developed that co-synthesizes heterogeneous distributed systems of arbitrary topology and the associated application software architecture; a priority prediction method is also proposed to schedule processes.
Abstract: Describes a new, sensitivity-driven algorithm for the co-synthesis of real-time distributed embedded systems. Many embedded computing systems are distributed systems: communicating periodic processes executing on several CPUs/ASICs connected by communication links. We use performance estimates to compute a local sensitivity of the design to process allocation. We propose a priority prediction method to schedule processes. Based on these techniques, we develop a gradient-search algorithm which co-synthesizes heterogeneous distributed systems of arbitrary topology and the associated application software architecture. Experimental results show that our algorithm can find good implementation architectures in small amounts of CPU time.

Proceedings ArticleDOI
25 Oct 1995
TL;DR: In this paper, the authors describe an architecture for the runtime environment for parallel applications as a prelude to describing how parallel applications might interface to their environment in a portable way, and propose extensions to the MPI Standard that provide for dynamic process management, including spawning of new processes by a running application and connection to existing processes to support client/server applications.
Abstract: We describe an architecture for the runtime environment for parallel applications as a prelude to describing how parallel applications might interface to their environment in a portable way. We propose extensions to the Message-Passing Interface (MPI) Standard that provide for dynamic process management, including spawning of new processes by a running application and connection to existing processes to support client/server applications. Such extensions are needed if more of the runtime environment for parallel programs is to be accessible to MPI programs or to be themselves written using MPI. The extensions proposed here are motivated by real applications and fit cleanly with existing concepts of MPI. No changes to the existing MPI Standard are proposed, thus all present MPI programs will run unchanged.
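
The dynamic process management proposed here was later standardized in MPI-2; below is a minimal sketch of the resulting interface (the ./worker executable and process count are hypothetical).

```c
/* Sketch of dynamic process creation as later standardized in MPI-2
 * (MPI_Comm_spawn), which grew out of proposals such as this one. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collectively spawn 4 new processes running ./worker; the result
     * is an intercommunicator linking parents and children. */
    MPI_Comm children;
    int errcodes[4];
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, errcodes);

    /* Parent rank 0 hands work to child rank 0 over the new channel. */
    if (rank == 0) {
        int work = 42;
        MPI_Send(&work, 1, MPI_INT, 0, 0, children);
    }

    MPI_Comm_disconnect(&children);   /* sever the client/server link */
    MPI_Finalize();
    return 0;
}
```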

Proceedings ArticleDOI
25 Sep 1995
TL;DR: An architecture for a highly dependable real-time computer network suited for cyclic time-deterministic applications typically found in embedded control systems and directed towards the requirements of safety-critical automotive control systems is presented.
Abstract: An architecture for a highly dependable real-time computer network is presented. The architecture and communication protocol are suited for cyclic time-deterministic applications typically found in embedded control systems. Particular attention has been directed towards the requirements of safety-critical automotive control systems. The scheduling of both network communication and application processes is determined at compile time and is thus completely deterministic. A DACAPO system consists of a number of nodes communicating over two serial buses. Each node is composed of two sets of functionally identical fail-silent units, thus providing tolerance against any single fault. A high degree of error detection coverage and the tolerance towards transient faults inherently associated with cyclic operation combine to yield an architecture with a very high safety level.

Proceedings ArticleDOI
05 Aug 1995
TL;DR: The focus of this paper is how to apply the Chimera Methodology specifically to the development of robotic applications.
Abstract: Component-based real-time software speeds development and lowers cost of robotics applications. It enables the use of rapid prototyping or incremental software process models. The Chimera Methodology is a software engineering paradigm targeted at developing and integrating dynamically reconfigurable and reusable real-time software components. It is founded upon the notion of port-based objects. The focus of this paper is how to apply the Chimera Methodology specifically to the development of robotic applications.

Patent
Kayhan Kucukcakar
05 Jun 1995
TL;DR: In this paper, a compiler is used to generate a host microprocessor code from a portion of an application software code and a coprocessor code from the portion of the software code, then the compiler creates a code that serves as the software program.
Abstract: A computing system (10) and a method for designing the computing system (10) using hardware and software components. The computing system (10) includes programmable coprocessors (12, 13) having the same architectural style. Each coprocessor includes a sequencer (36) and a programmable interconnect network (34) and a varying number of functional units and storage elements. The computing system (10) is designed by using a compiler (71) to generate a host microprocessor code from a portion of an application software code and a coprocessor code from the portion of the application software code. The compiler (71) uses the host microprocessor code to determine the execution speed of the host microprocessor and the coprocessor code to determine the execution speed of the coprocessor and selects one of the host microprocessor or the coprocessor for execution of the portion of the application software code. Then the compiler (71) creates a code that serves as the software program.