
Showing papers on "Application software published in 1998"


Journal ArticleDOI
TL;DR: Experimental results are presented to demonstrate the accuracy and potential of Xception in the evaluation of the dependability properties of the complex computer systems available nowadays.
Abstract: An important step in the development of dependable systems is the validation of their fault tolerance properties. Fault injection has been widely used for this purpose; however, with the rapid increase in processor complexity, traditional techniques are also increasingly more difficult to apply. This paper presents a new software-implemented fault injection and monitoring environment, called Xception, which is targeted at modern and complex processors. Xception uses the advanced debugging and performance monitoring features existing in most modern processors to inject quite realistic faults by software, and to monitor the activation of the faults and their impact on the target system behavior in detail. Faults are injected with minimum interference with the target application. The target application is not modified, no software traps are inserted, and it is not necessary to execute the target application in special trace mode (the application is executed at full speed). Xception provides a comprehensive set of fault triggers, including spatial and temporal fault triggers, and triggers related to the manipulation of data in memory. Faults injected by Xception can affect any process running on the target system (including the kernel), and it is possible to inject faults in applications for which the source code is not available. Experimental results are presented to demonstrate the accuracy and potential of Xception in the evaluation of the dependability properties of the complex computer systems available nowadays.
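Xception's actual mechanism uses processor debug and performance-monitoring hardware; as a hedged illustration of the underlying fault model only (a transient single-bit flip in memory), a minimal sketch might look like this. All names here are invented for illustration:

```python
import random

def inject_bit_flip(state: bytearray, rng: random.Random) -> tuple[int, int]:
    """Flip one random bit in a memory buffer, emulating a transient fault.

    This only illustrates the single-bit-flip fault model; it is not how
    Xception injects faults (Xception uses debug hardware, not code that
    writes into the target's address space from Python).
    """
    byte_index = rng.randrange(len(state))
    bit_index = rng.randrange(8)
    state[byte_index] ^= 1 << bit_index   # flip exactly one bit
    return byte_index, bit_index

rng = random.Random(42)
target = bytearray(16)                    # stand-in for target memory
inject_bit_flip(target, rng)
assert sum(bin(b).count("1") for b in target) == 1  # exactly one bit changed
```

A real injector would pair this fault model with triggers (spatial, temporal, data-access) deciding when and where the flip occurs.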

393 citations


Patent
21 Sep 1998
Abstract: An extended functionality remote control (EFRC) provides a hardware/software implementation of an integrated interface for remote control emulation. A PDA or other portable computing device is used as a platform for the EFRC application software and peripheral hardware. The EFRC also merges information services into remote controls. Implementation of these information services takes the form of, e.g., electronic program guides (EPGs) merged with the functioning of remote controls. In addition to the portable computing device, the hardware portion of the invention according to a preferred embodiment includes a keypad and an infrared transmitter subsystem which are managed by a microcontroller. The microcontroller also exchanges data with application software of the computing device via a serial communications link. The EFRC according to a preferred embodiment provides a universal remote control which can remain current by shifting the remote control code functions of a universal remote control from hardware to software. New codes may be made available through the internet and downloaded into an application which can utilize these codes for the targeted consumer electronics device. The preferred embodiment provides the user the ability to select the components of their specific consumer electronic device on the World Wide Web, leading to the download of a data file by the user with all remote control code information preprogrammed into this data file. The preferred embodiment can dynamically construct the user's remote control buttons on a graphical touch screen, from information contained within the downloaded data file.

327 citations


Proceedings ArticleDOI
30 Mar 1998
TL;DR: The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user directed assignment. Results indicate that, for the computer environments simulated, SmartNet outperforms both load balancing and user directed assignments, based on the maximum time users must wait for their tasks to finish.
Abstract: It is increasingly common for computer users to have access to several computers on a network, and hence to be able to execute many of their tasks on any of several computers. The choice of which computers execute which tasks is commonly determined by users based on a knowledge of computer speeds for each task and the current load on each computer. A number of task scheduling systems have been developed that balance the load of the computers on the network, but such systems tend to minimize the idle time of the computers rather than minimize the idle time of the users. The paper focuses on the benefits that can be achieved when the scheduling system considers both the computer availabilities and the performance of each task on each computer. The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user directed assignment. Results are presented where the operation of hundreds of different networks of computers running thousands of different mixes of tasks are simulated in a batch environment. These results indicate that, for the computer environments simulated, SmartNet outperforms both load balancing and user directed assignments, based on the maximum time users must wait for their tasks to finish.
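The key idea above, that a scheduler should consider both machine load and per-task performance, can be sketched with a toy estimated-time-to-compute (ETC) matrix. This is a hedged illustration only: SmartNet's actual algorithms are far more elaborate than this greedy comparison, and the numbers are invented:

```python
# ETC[t][m]: estimated time to compute task t on machine m (invented values).
ETC = [
    [10, 50],   # tasks 0-1 are fast on machine 0
    [10, 50],
    [40,  8],   # tasks 2-3 are fast on machine 1
    [40,  8],
]

def makespan(assign, etc):
    """Maximum finishing time across machines for a task->machine assignment."""
    ready = [0.0] * len(etc[0])
    for t, m in enumerate(assign):
        ready[m] += etc[t][m]
    return max(ready)

def load_balance(etc):
    """Always pick the currently least-loaded machine, ignoring task affinity."""
    ready, assign = [0.0] * len(etc[0]), []
    for row in etc:
        m = min(range(len(ready)), key=lambda i: ready[i])
        ready[m] += row[m]
        assign.append(m)
    return assign

def min_completion_time(etc):
    """Pick the machine where this task finishes earliest (load + speed)."""
    ready, assign = [0.0] * len(etc[0]), []
    for row in etc:
        m = min(range(len(ready)), key=lambda i: ready[i] + row[i])
        ready[m] += row[m]
        assign.append(m)
    return assign

print(makespan(load_balance(ETC), ETC))         # ignores affinity: makespan 90
print(makespan(min_completion_time(ETC), ETC))  # affinity-aware: makespan 20
```

On this contrived workload the affinity-aware heuristic wins decisively, which is the qualitative effect the paper measures at much larger scale.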

316 citations


Proceedings ArticleDOI
04 Nov 1998
TL;DR: An SNMP-based distributed data collection tool is used to collect operating system resource usage and system activity data at regular intervals from networked UNIX workstations, and a metric, "estimated time to exhaustion", calculated using well-known slope estimation techniques, is proposed.
Abstract: The phenomenon of software aging refers to the accumulation of errors during the execution of the software which eventually results in its crash/hang failure. A gradual performance degradation may also accompany software aging. Pro-active fault management techniques such as "software rejuvenation" (Y. Huang et al., 1995) may be used to counteract aging if it exists. We propose a methodology for detection and estimation of aging in the UNIX operating system. First, we present the design and implementation of an SNMP based, distributed monitoring tool used to collect operating system resource usage and system activity data at regular intervals, from networked UNIX workstations. Statistical trend detection techniques are applied to this data to detect/validate the existence of aging. For quantifying the effect of aging in operating system resources, we propose a metric: "estimated time to exhaustion", which is calculated using well known slope estimation techniques. Although the distributed data collection tool is specific to UNIX, the statistical techniques can be used for detection and estimation of aging in other software as well.
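The "estimated time to exhaustion" metric can be sketched as follows. This is a simplified illustration: the paper relies on non-parametric slope estimators, while this sketch uses ordinary least squares for brevity, and the sample data are invented:

```python
def estimated_time_to_exhaustion(times, usage, capacity):
    """Fit a least-squares slope to resource usage over time; if the resource
    is trending toward depletion, extrapolate when it reaches `capacity`.
    Returns None when no upward (aging) trend is present.
    """
    n = len(times)
    mt = sum(times) / n
    mu = sum(usage) / n
    slope = (sum((t - mt) * (u - mu) for t, u in zip(times, usage))
             / sum((t - mt) ** 2 for t in times))
    if slope <= 0:
        return None                       # no aging trend detected
    return (capacity - usage[-1]) / slope

# e.g. swap space in MB sampled hourly, drifting upward ~2 MB/hour (invented)
hours = [0, 1, 2, 3, 4, 5]
used  = [100, 102, 104, 106, 108, 110]
print(estimated_time_to_exhaustion(hours, used, capacity=200))  # ~45 hours
```

The same calculation applies to any monotonically consumed resource the monitoring tool samples (file table slots, swap, free memory viewed as depletion).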

298 citations


Patent
15 Sep 1998
TL;DR: A system and method for computer-assisted database management software creation of a target software application from a description known as a dictionary (106) interoperating with a universal software application (108). The dictionary's contents customize the universal application (108) into the target software application, created from a high-level dialog between an application designer and a graphical application editor (104). The application editor provides an environment for editing and creating custom applications and automatically creates security partitioning of responsibilities and users, hierarchical menu structures, groupings of database data elements into efficient sets, database transactions, and database partitioning without requiring programming in the SQL language by an application designer.
Abstract: A system and method for computer-assisted database management software creation of a target software application from a description known as dictionary (106) interoperating with universal software application (108). Dictionary (106) contents customize universal application (108) into target software application (100), created from a high-level dialog between an application designer and graphical application editor (104). Application editor (104) provides an environment for editing and creating custom applications and automatically creates security partitioning of responsibilities and users, hierarchical menu structures, groupings of database data elements into efficient sets, database transactions, and database partitioning without requiring programming in the SQL language by an application designer. The computer stores dictionary (106) in a database for accessing by universal application (108). Dictionary (106) customizes re-usable universal application (108) for interaction with relational databases such as Oracle®, IBM® DB2, and Sybase®.

274 citations


Patent
26 May 1998
TL;DR: In this paper, the authors describe a system that includes a system server and a network supporting multiple computer processors, which are coupled by way of the network to physical devices of the system.
Abstract: A computer is used to manage communication over a network between one or more network addressable units and a plurality of physical devices of a passenger entertainment system. The system is configured and operated using software to provide passenger entertainment services including audio and video on-demand, information dissemination, product and service order processing, video teleconferencing and data communication services. The system includes a system server and a network supporting multiple computer processors. The processors and the server comprise application software that controls telephony applications and network services. The server is coupled by way of the network to physical devices of the system. The server comprises software that instantiates a network addressable unit server that interfaces to one or more network addressable units, that instantiates a services server that interfaces to one or more service clients that provide services of the passenger entertainment system, and that instantiates a router and one or more mail slots comprising a lookup table that identifies each of the clients. Data comprising a network routing address and a physical device type are used to access the lookup table to determine message destinations. The respective servers interface to their clients by way of named pipes that translate messages from a first format to a second format. The server also comprises software that instantiates intranodal thread processors that route messages between processes on the physical devices and the one or more service clients to route services of the passenger entertainment system to the processes on the physical devices.
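The lookup-table routing described above, where a (routing address, device type) pair selects the destination client, reduces to a simple keyed dispatch. The sketch below is hypothetical: table contents, service names, and the dead-letter fallback are invented for illustration (the related patent does describe logging invalid destination addresses):

```python
# Invented routing table: (network routing address, physical device type)
# -> destination service client.
ROUTES = {
    (0x10, "seat_display"): "video_service",
    (0x10, "handset"):      "telephony_service",
    (0x22, "seat_display"): "epg_service",
}

def route(addr: int, device_type: str) -> str:
    """Resolve a message destination from the lookup table; unknown
    destinations fall through to a dead-letter log, mirroring the patent's
    logging of invalid destination addresses."""
    try:
        return ROUTES[(addr, device_type)]
    except KeyError:
        return "dead_letter"

assert route(0x10, "handset") == "telephony_service"
assert route(0x99, "handset") == "dead_letter"
```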

238 citations


Proceedings ArticleDOI
01 Apr 1998
TL;DR: This work presents an in-depth evaluation of several mobile code design paradigms against the traditional client-server architecture, within the application domain of network management.
Abstract: The question of whether technologies supporting mobile code are bringing significant benefits to the design and implementation of distributed applications is still an open one. Even more difficult is to identify precisely under which conditions a design exploiting mobile code is preferable over a traditional one. In this work, we present an in-depth evaluation of several mobile code design paradigms against the traditional client-server architecture, within the application domain of network management. The evaluation is centered around a quantitative model, which is used to determine precisely the conditions for the selection of a design paradigm minimizing the network traffic related to management.
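The flavor of such a quantitative model can be sketched by comparing the traffic of repeated client-server polling against shipping management code to the device once and returning only a compact result. This is a hedged toy version with invented byte counts, not the paper's actual model:

```python
def client_server_traffic(n_polls: int, req: int, rep: int) -> int:
    """Polling-style management: every request/reply crosses the network."""
    return n_polls * (req + rep)

def mobile_code_traffic(code_size: int, result_size: int) -> int:
    """Ship the management code to the device once; only a semantically
    compressed result travels back."""
    return code_size + result_size

# Invented sizes in bytes: many fine-grained polls vs. one code shipment.
polls, req, rep = 500, 100, 400
code, result = 20_000, 2_000
print(client_server_traffic(polls, req, rep))  # 250000 bytes over the wire
print(mobile_code_traffic(code, result))       # 22000 bytes over the wire
```

The crossover point, where shipping code stops paying off, depends on the number of interactions and the code-to-result size ratio, which is exactly the kind of condition the paper's model derives.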

173 citations


Journal ArticleDOI
TL;DR: A method to evaluate user interfaces using task models and logs generated from a user test of an application is proposed and can be incorporated into an automatic tool which gives the designer information useful to evaluate and improve the user interface.
Abstract: The main goal of the work is to propose a method to evaluate user interfaces using task models and logs generated from a user test of an application. The method can be incorporated into an automatic tool which gives the designer information useful to evaluate and improve the user interface. These results include an analysis of the tasks which have been accomplished, those which failed and those never tried, user errors and their type, time related information, task patterns among the accomplished tasks, and the available tasks from the current state of the user session. This information is also useful to an evaluator checking whether the specified usability goals have been accomplished.

170 citations


Journal ArticleDOI
TL;DR: The authors describe Vector IRAM, an initial approach in this direction, and challenge others in the very successful computer architecture community to investigate architectures with a heavy bias for the future.
Abstract: In the past few years, two important trends have evolved that could change the shape of computing: multimedia applications and portable electronics. Together, these trends will lead to a personal mobile-computing environment, a small device carried all the time that incorporates the functions of the pager, cellular phone, laptop computer, PDA, digital camera, and video game. The microprocessor needed for these devices is actually a merged general-purpose processor and digital-signal processor, with the power budget of the latter. Yet for almost two decades, architecture research has focused on desktop or server machines. We are designing processors of the future with a heavy bias toward the past. To design successful processor architectures for the future, we first need to explore future applications and match their requirements in a scalable, cost-effective way. The authors describe Vector IRAM, an initial approach in this direction, and challenge others in the very successful computer architecture community to investigate architectures with a heavy bias for the future.

144 citations


Proceedings ArticleDOI
23 Jun 1998
TL;DR: A detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service are provided, and features added to make it easier to implement and manage fault-tolerant applications on M SCS are described.
Abstract: Microsoft Cluster Service (MSCS) extends the Windows NT operating system to support high-availability services. The goal is to offer an execution environment where off-the-shelf server applications can continue to operate, even in the presence of node failures. Later versions of MSCS will provide scalability via a node and application management system which allows applications to scale to hundreds of nodes. In this paper we provide a detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service. The paper also describes how some major applications use the MSCS features, and describes features added to make it easier to implement and manage fault-tolerant applications on MSCS.

126 citations


Proceedings ArticleDOI
21 Jun 1998
TL;DR: The Simplex architecture is described, a real-time software technology which supports the safe, reliable introduction of control system upgrades while the system is running, and its basic structure in control systems is introduced.
Abstract: We describe the Simplex architecture, a real-time software technology which supports the safe, reliable introduction of control system upgrades while the system is running. We introduce its basic structure in control systems, discuss its fault-tolerance feature, and investigate the control issues when the technology is employed. Application of the Simplex architecture is demonstrated for a plasma-enhanced chemical vapor deposition (PECVD) system, a standard process in semiconductor manufacturing. We conclude the paper with a discussion of the potential impact that the Simplex architecture can make on future control applications.

Proceedings ArticleDOI
03 May 1998
TL;DR: Results from analyzing the vulnerability of security-critical software applications to malicious threats and anomalous events using an automated fault injection analysis approach are presented.
Abstract: The paper presents results from analyzing the vulnerability of security-critical software applications to malicious threats and anomalous events using an automated fault injection analysis approach. The work is based on the well understood premise that a large proportion of security violations result from errors in software source code and configuration. The methodology employs software fault injection to force anomalous program states during the execution of software and observes their corresponding effects on system security. If insecure behaviour is detected, the perturbed location that resulted in the violation is isolated for further analysis and possibly retrofitting with fault tolerant mechanisms.

Proceedings ArticleDOI
B. Srinivasan1, S. Pather1, R. Hill1, F. Ansari1, D. Niehaus1 
03 Jun 1998
TL;DR: The authors have developed the ATM Reference Traffic System (ARTS), a firm real-time system capable of recording and accurately reproducing packet-level ATM traffic streams with timing resolution in microseconds.
Abstract: The emergence of multimedia and high-speed networks has expanded the class of applications that combine the timing requirements of hard real-time applications with the need for operating system services typically available only on soft real-time or time-sharing systems. These applications, which the authors describe as firm real-time, currently have no widely-available, low-cost operating system to support them. They discuss modifications they have made to the popular Linux operating system that give it the ability to support the comparatively stringent timing requirements of these applications, while still giving them access to the full range of Linux services. Using their firm real-time system as a basis, they have developed the ATM Reference Traffic System (ARTS), which is capable of recording and accurately reproducing packet-level ATM traffic streams with timing resolution in microseconds. The effectiveness of this application, as well as the comparative ease with which it was developed, illustrate the performance and utility of the system.

Journal ArticleDOI
TL;DR: In this article, the authors argue that to achieve the goal of widespread component-based engineering, the industry must overcome challenges related to safety, reliability, and security, if the industry cannot adequately address these problems, the goal may remain unmet.
Abstract: An increasing number of organizations are using acquired software applications as components of larger applications. In this new role, acquired software must integrate with other software functionality. In the introduction to the cover features, the author describes why the industry is moving toward a software design paradigm in which many of the needed software functions already exist. The developer's task, then, becomes one of accurately selecting functions and integrating them into a system. The problem is that commercial, off-the-shelf (COTS) software is almost always delivered in a black box with restrictions that keep developers from looking inside. Therefore, most forms of software analysis that would help developers decide if the software is going to perform safely, securely, and reliably are not available. Developers are thus at the mercy of the software vendor in many ways. The author argues that to achieve the goal of widespread component-based engineering, the industry must overcome challenges related to safety, reliability, and security. If the industry cannot adequately address these problems, the goal may remain unmet.

Proceedings ArticleDOI
15 Feb 1998
TL;DR: An infrastructure that is required for a new approach to network management utilizing mobile code, which involves providing a framework for code mobility, access to managed resources and communication between agents is introduced.
Abstract: The research is part of the Perpetuum project, which makes use of mobile agents for network management. In this paper, we introduce an infrastructure that is required for a new approach to network management utilizing mobile code. This involves providing a framework for code mobility, access to managed resources, and communication between agents. The infrastructure is the fundamental part that provides the base upon which our research on applications of mobile code technology is built. The infrastructure is built on Java, which was selected because it addresses several critical issues such as security, portability, persistent state through serialization, and networking. We also present some examples of infrastructure application to demonstrate the advantages of the use of mobile agents for network management.

Proceedings ArticleDOI
28 Jul 1998
TL;DR: The process used to develop the applications, the lessons learned and conclusions regarding the effectiveness of the Globus toolkit approach are described.
Abstract: The development of applications and tools for high-performance "computational grids" is complicated by the heterogeneity and frequently dynamic behavior of the underlying resources; by the complexity of the applications themselves, which often combine aspects of supercomputing and distributed computing; and by the need to achieve high levels of performance. The Globus toolkit has been developed with the goal of simplifying this application development task, by providing implementations of various core services deemed essential for high-performance distributed computing. In this paper, we describe two large applications developed with this toolkit: a distributed interactive simulation and a teleimmersion system. We describe the process used to develop the applications, review the lessons learned and draw conclusions regarding the effectiveness of the toolkit approach.

Patent
26 May 1998
TL;DR: In this article, the authors describe a system that is configured and operated using software to provide passenger entertainment services including audio and video on-demand, information dissemination, product and service order processing, video teleconferencing and data communication services.
Abstract: A computer is used to manage communication over a network between one or more network addressable units and a plurality of physical devices of a passenger entertainment system. The system is configured and operated using software to provide passenger entertainment services including audio and video on-demand, information dissemination, product and service order processing, video teleconferencing and data communication services. The system includes a system server and a network supporting multiple computer processors. The processors and the server comprise application software that controls telephony applications and network services. The server is coupled by way of the network to physical devices of the system. The server comprises software for instantiating a dispatch object to open a framework for one or more network addressable unit objects, for instantiating one or more virtual line replaceable unit objects to manage communication between a network addressable unit and one or more physical devices, and for instantiating a message processor for moving messages to the one or more network addressable unit objects for delivery to the one or more physical devices. The message processor receives messages containing a network routing address from one or more device drivers. The message processor utilizes the network routing address and a physical device type to access a table and determine the ultimate destination for the message. The message processor has at least one input named pipe and one output named pipe, and utilizes the named pipes to translate messages from a first format to a second format. The message processor logs invalid destination addresses in a storage medium. The message processor instantiates each device driver from a device handler class member.

Patent
11 Mar 1998
TL;DR: An agent accessory tool is provided which enables integration of a Web-type application operating within a browser area and a general application operating outside the browser area; it includes an agent program, interlocking by HTTP, incorporated in a personal computer (hereinafter referred to as PC) of each of a plurality of clients.
Abstract: An agent accessory tool is provided which enables integration of a Web-type application operating within a browser area and a general application operating outside the browser area. The agent accessory tool includes an agent program, interlocking by HTTP, incorporated in a personal computer (hereinafter referred to as PC) of each of a plurality of clients, and a Web server having a CGI interface for executing communication software and an external application of the HTTP concerned to each client PC through a communication line. In this agent accessory tool, interlocking with the integrated application on the Web server by HTTP, the agent program accesses various data of a CGI program through the Web server under input conditions. When, as a result, the data are updated from previous access data or in conformity to predetermined conditions, an accessory tool including an avatar (digital actor) is caused to appear on the display of the client's PC, and is also caused to conduct a predetermined action/reaction so as to transmit the existence of information, the non-conformity to the predetermined conditions, and music and images to each client. As a result, the agent accessory tool operates integrally with the Web application by accessing the Web server, without booting the browser software.

Journal ArticleDOI
TL;DR: This is one of the first approaches that considers globally HW and SW contributions to power in a system-level design flow for control dominated embedded systems.
Abstract: The need for low-power embedded systems has become very significant within the microelectronics scenario in recent years. A power-driven methodology is mandatory during embedded systems design to meet system-level requirements while fulfilling time-to-market. The aim of this paper is to introduce accurate and efficient power metrics included in a hardware/software (HW/SW) codesign environment to guide the system-level partitioning. Power evaluation metrics have been defined to widely explore the architectural design space at a high abstraction level. This is one of the first approaches that globally considers HW and SW contributions to power in a system-level design flow for control-dominated embedded systems.

Journal ArticleDOI
TL;DR: This paper is a comprehensive examination of the impact of software piracy worldwide; three main messages emerge from data collected from a variety of sources and analyzed using strict research methodology.
Abstract: This paper is a comprehensive examination of the impact of software piracy worldwide. Software piracy is defined by the American Software Publishers Association as the unauthorized duplication of computer software. Completed in April 1997, this survey examines business application software piracy in 1996. Three main messages emerge from data collected from a variety of sources and analyzed using strict research methodology. First, business application software piracy cost the industry $11.3 billion in 1996. Second, continued growth of the worldwide software industry is being retarded by piracy. Third, governments worldwide must do more to combat piracy. 1997 piracy data have become available since this paper was submitted; 1998 data are expected in May 1999.

Proceedings ArticleDOI
01 Apr 1998
TL;DR: A new software process model, ASP (Agile Software Process), is proposed and experience with it in large-scale software development is discussed. It aims at quick delivery of software products by integrating lightweight processes, modular process structures, and incremental and iterative process enaction.
Abstract: This article proposes a new software process model, ASP (Agile Software Process), and discusses experience with it in large-scale software development. The Japanese software factory was a successful model in the development of quality software for large-scale business applications in the 80s. However, the requirements for software development have dramatically changed: development cycle-time has been promoted to one of the top goals of software development in the 90s. Unlike conventional software process models based on volume, the ASP is a time-based process model which aims at quick delivery of software products by integrating lightweight processes, modular process structures, and incremental and iterative process enaction. The major contributions of ASP include: a new process model and its enaction mechanism based on time; a software process model for evolutional delivery; a software process architecture integrating concurrent and asynchronous processes, incremental and iterative process enaction, distributed multi-site processes, and people-centered processes; a process-centered software engineering environment for ASP; and experience and lessons learned from the use of ASP in the development of a family of large-scale communication software systems for more than five years.

Journal ArticleDOI
TL;DR: The basic structure of the Simplex architecture is introduced and the types of faults it can handle are described, and the fault detection mechanism based on the trajectories of the physical system in its state space is described.
Abstract: One of the most attractive features of computer-controlled systems should be the ease with which they can be modified to incorporate improvements and new capabilities. It would be desirable to make the software changes in a safe and reliable fashion while the system is running. The Simplex architecture, a real-time software technology developed at the Carnegie Mellon University Software Engineering Institute, is designed for this purpose. We introduce the basic structure of the Simplex architecture and describe the types of faults it can handle. We describe the fault detection mechanism based on the trajectories of the physical system in its state space, and derive the control switching logic that determines which controller is chosen to control the physical system in each sampling period.
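The switching logic described above, run the new controller while the plant stays inside a verified region of the state space, fall back to the trusted controller otherwise, can be sketched as follows. This is a hedged illustration: the region test here is a plain box constraint with invented bounds, whereas the paper derives the safety region from the plant dynamics:

```python
def choose_controller(state, safety_region, complex_output, safety_output):
    """Simplex-style switching sketch: use the (possibly newly upgraded)
    complex controller's output while the plant state lies inside the
    safety region; otherwise use the simple, trusted safety controller."""
    lo, hi = safety_region
    inside = all(l <= x <= h for x, l, h in zip(state, lo, hi))
    return complex_output if inside else safety_output

# Inverted-pendulum-flavored example: state = (angle, angular velocity),
# bounds invented for illustration.
region = ((-0.3, -1.0), (0.3, 1.0))   # (lower bounds, upper bounds)
assert choose_controller((0.1, 0.2), region, "complex", "safety") == "complex"
assert choose_controller((0.5, 0.2), region, "complex", "safety") == "safety"
```

The check runs every sampling period, so a faulty upgrade can perturb the plant for at most one period before the safety controller takes over.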

Patent
Kukakar Kayhan1
29 Sep 1998
TL;DR: A compiler is used to generate a host microprocessor code from a portion of an application software code and a coprocessor code from the same portion; the compiler then creates the code that serves as the software program.
Abstract: A computing system (10) and a method for designing the computing system (10) using hardware and software components. The computing system (10) includes programmable coprocessors (12, 13) having the same architectural style. Each coprocessor includes a sequencer (36) and a programmable interconnect network (34) and a varying number of functional units and storage elements. The computing system (10) is designed by using a compiler (71) to generate a host microprocessor code from a portion of an application software code and a coprocessor code from the portion of the application software code. The compiler (71) uses the host microprocessor code to determine the execution speed of the host microprocessor and the coprocessor code to determine the execution speed of the coprocessor and selects one of the host microprocessor or the coprocessor for execution of the portion of the application software code. Then the compiler (71) creates a code that serves as the software program.

Proceedings ArticleDOI
29 Jul 1998
TL;DR: It is found that the three information visualization designs have inherent problems when used for visualizing different data sets, and that certain tasks cannot be supported by the designs.
Abstract: A number of 3D information visualization designs have been invented during the last few years. However, comparisons of such designs have been scarce, making it difficult for application developers to select a suitable design. This paper reports on a case study where three existing visualization designs have been implemented and evaluated. We found that the three information visualization designs have inherent problems when used for visualizing different data sets, and that certain tasks cannot be supported by the designs. A general methodology for evaluation is presented, which comprises the evaluation of suitability for different data sets as well as the evaluation of support for user tasks.

Patent
LaVerne L. Hoag
14 Jan 1998
TL;DR: In this article, a method for the selection and assignment of keyboard access mnemonics and accelerator key combinations as part of the application software development process is described, where each function is first evaluated on the basis of usage likelihood and then prioritized before assignment begins in order to maximize the number and quality of successful assignments.
Abstract: A method is described for the selection and assignment of keyboard access mnemonics and accelerator key combinations as part of the application software development process. After determining application functions and categories that require assignment, mnemonics and accelerators are assigned using pre-established recommendations, assignment rules and/or user assignment. If the assignments are made automatically, each function is first evaluated on the basis of usage likelihood and then prioritized before assignment begins in order to maximize the number and quality of successful assignments. In the alternative, assignments can be made on a function-by-function basis.
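The automatic assignment process described above — prioritizing functions by usage likelihood, then assigning each a key — can be sketched as a greedy pass over the ranked functions. The ranking and letter-picking below are illustrative assumptions, not the patented rules.

```python
# Sketch of priority-driven mnemonic assignment (assumed greedy strategy).

def assign_mnemonics(functions):
    """functions: list of (name, usage_likelihood); returns name -> letter.

    Higher-likelihood functions are assigned first, so they get the best
    (earliest) available letter of their own name."""
    taken, result = set(), {}
    for name, _ in sorted(functions, key=lambda f: -f[1]):
        for ch in name.lower():
            if ch.isalpha() and ch not in taken:
                taken.add(ch)
                result[name] = ch
                break
    return result

menu = [("Save", 0.9), ("Save As", 0.3), ("Search", 0.6)]
print(assign_mnemonics(menu))
```

Prioritizing first matters: because "Save" is used most, it claims "s", and the lower-priority "Search" and "Save As" fall back to "e" and "a" instead of conflicting.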

Proceedings ArticleDOI
04 Mar 1998
TL;DR: In this paper, the authors present the implementation of the Aster development environment, which realizes automatic configuration of middleware that is customized to the applications' needs from the standpoint of provided non-functional properties (e.g., fault-tolerance, security).
Abstract: Middleware configurations provide a means to make accessible a wide range of applications on a (possibly large) distributed heterogeneous platform. However, as new application areas appear, middleware configurations will have to evolve to accommodate those new applications' needs. This paper discusses the implementation of the Aster development environment, which realizes automatic configuration of middleware that is customized to the applications' needs from the standpoint of provided non-functional properties (e.g., fault-tolerance, security). The environment relies on two main tools. The first tool retrieves the software constituting the middleware that meets application requirements, by means of software specification matching. The second tool implements the interfacing of the application's software components with the customized middleware. Interfacing is discussed in the framework of the CORBA environment, hence addressing construction of customized middleware on top of an ORB, possibly using common object services specified by the OMG.
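The first tool's retrieval step — matching an application's required non-functional properties against the properties each candidate middleware component provides — can be approximated with simple subset matching. The real environment matches formal software specifications, so the registry and property names below are assumptions for illustration only.

```python
# Sketch of property-based middleware retrieval (assumed subset semantics).

def matches(required, provided):
    """A component is a candidate if it provides every required property."""
    return set(required) <= set(provided)

# Hypothetical registry: component name -> non-functional properties provided.
registry = {
    "orb-basic":  {"naming"},
    "orb-secure": {"naming", "security"},
    "orb-ft":     {"naming", "fault-tolerance", "security"},
}

def retrieve(required):
    """Return all registered components that satisfy the requirements."""
    return sorted(name for name, props in registry.items()
                  if matches(required, props))

print(retrieve({"security"}))
print(retrieve({"security", "fault-tolerance"}))
```

Adding a requirement narrows the candidate set, which mirrors how an application's non-functional needs drive the choice of middleware configuration.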

Proceedings ArticleDOI
01 Apr 1998
TL;DR: This work adapted a control flow-based interprocedural slicing algorithm so that it accounts for interProcedural control dependencies not recognized by other slicing algorithms, and reuses slicing information for improved efficiency.
Abstract: To manage the evolution of software systems effectively, software developers must understand software systems, identify and evaluate alternative modification strategies, implement appropriate modifications, and validate the correctness of the modifications. One analysis technique that assists in many of these activities is program slicing. To facilitate the application of slicing to large software systems, we adapted a control flow-based interprocedural slicing algorithm so that it accounts for interprocedural control dependencies not recognized by other slicing algorithms, and reuses slicing information for improved efficiency. Our initial studies suggest that additional slice accuracy and slicing efficiency may be achieved with our algorithm.
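At its core, a backward slice is a reachability computation over data and control dependencies: starting from the slicing criterion, collect every statement it transitively depends on. The toy graph below illustrates only that core idea; it omits the interprocedural control dependencies that distinguish the paper's algorithm, and its statements are assumptions.

```python
# Sketch of backward slicing as reachability over a dependence graph.

def backward_slice(deps, criterion):
    """deps: statement -> set of statements it depends on (data or control).
    Returns the set of statements that may affect the criterion."""
    slice_, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in slice_:
            slice_.add(s)
            work.extend(deps.get(s, ()))
    return slice_

deps = {
    "print(z)":  {"z = x + y"},
    "z = x + y": {"x = 1", "y = 2"},
    "w = 5":     set(),
}
print(sorted(backward_slice(deps, "print(z)")))
```

Note that `w = 5` is excluded: it cannot affect `print(z)`, which is exactly the information a developer needs when evaluating a modification's impact.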

Proceedings ArticleDOI
01 May 1998
TL;DR: An image processing system built around PADDI-2, a custom 48-node MIMD parallel DSP, is presented; its software environment supports a multiprocessor system under development (VGI-1), with implementation dependencies isolated in layered encapsulations.
Abstract: We have integrated an image processing system built around PADDI-2, a custom 48 node MIMD parallel DSP. The system includes image processing algorithms, a graphical SFG tool, a simulator, routing tools, compilers, hardware configuration and debugging tools, application development libraries, and software implementations for hardware verification. The system board, connected to a SPARCstation via a custom Sbus controller, contains 384 processors in 8 VLSI chips. The software environment supports a multiprocessor system under development (VGI-1). The software tools and libraries are modular, with implementation dependencies isolated in layered encapsulations.

Proceedings ArticleDOI
13 Nov 1998
TL;DR: A middleware architecture named ROAFTS (Real-time Object-oriented Adaptive Fault Tolerance Support) is presented, designed to support adaptive fault-tolerant execution of not only conventional process-structured distributed real-time (RT) application software but also new-style object- Structured distributed RT application software.
Abstract: A middleware architecture named ROAFTS (Real-time Object-oriented Adaptive Fault Tolerance Support) is presented. ROAFTS is designed to support adaptive fault-tolerant execution of not only conventional process-structured distributed real-time (RT) application software but also new-style object-structured distributed RT application software. While ROAFTS contains fault tolerance schemes devised for quantitatively guaranteed RT fault tolerance, it is also designed to relax that characteristic while the application is in a soft RT phase in order to reduce resource use. Through three different prototype implementation experiences using both commercial operating system kernels and home-grown RT kernels, the middleware architecture has been refined and its basic capabilities and effectiveness have been validated. The fault tolerance schemes supported and the integrating architecture are discussed in this paper. Implementation experiences and some future tasks are also discussed.
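The adaptive behavior described — quantitatively guaranteed fault tolerance in hard real-time phases, relaxed to reduce resource use in soft real-time phases — can be caricatured as a mode selector. The scheme names and parameters below are illustrative assumptions, not ROAFTS internals.

```python
# Sketch of adaptive fault-tolerance scheme selection (assumed schemes).

def select_ft_scheme(phase):
    """Pick a fault-tolerance scheme based on the application's RT phase."""
    if phase == "hard-rt":
        # Resource-heavy scheme with tight recovery-time guarantees.
        return {"scheme": "active-replication", "replicas": 3}
    # Cheaper scheme acceptable while deadlines are soft.
    return {"scheme": "checkpointing", "interval_ms": 500}

print(select_ft_scheme("hard-rt"))
print(select_ft_scheme("soft-rt"))
```

The point of the adaptation is that the middleware, not the application, decides when the expensive guarantees are actually needed.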

Journal ArticleDOI
TL;DR: The design and synthesis of an efficient hardware-software implementation for a multifunction embedded system is formulated as a codesign problem and solved by modifying an existing partitioning algorithm used to partition single-function systems.
Abstract: We are interested in optimizing the design of multifunction embedded systems such as multistandard audio/video codecs and multisystem phones. Such systems run a prespecified set of applications, and any "one" of the applications is selected at run time, depending on system parameters. Our goal is to develop a methodology for the efficient design of such systems. A key observation underlying our method is that it may not be efficient to design for each application separately. This is attributed to two factors. First, considering each application in isolation can lead to application-specific decisions that do not necessarily lead to the best overall system solution. Second, these applications typically tend to have several commonalities among them, and considering applications independently may lead to inconsistent mappings of common tasks in different applications. Our approach is to optimize jointly across the set of applications while ensuring that each application itself meets its timing constraints. Based on these guiding principles, we formulate, as a codesign problem, the design and synthesis of an efficient hardware-software implementation for a multifunction embedded system. The first step in our methodology is to identify nodes that represent similar functionality across different applications. Such "common" nodes are characterized by several metrics such as their repetitions, urgency, concurrency, and performance/area tradeoff. These metrics are quantified and used by a hardware/software partitioning tool to influence hardware/software mapping decisions. The idea behind this is to bias common tasks toward the same resource as far as possible while also considering preferences and timing constraints local to each application. Further, relative criticality of applications is also considered, and the mapping decisions in more critical applications are allowed to influence those in less critical applications.
We demonstrate how this is achieved by modifying an existing partitioning algorithm (GCLP) used to partition single-function systems. Our modified algorithm considers global preferences across the application set as well as the preference of each individual application to generate an efficient overall solution while ensuring that timing constraints of each application are met. The overall result of the system-level partitioning process is 1) a hardware or software mapping and 2) a schedule for execution for each node within the application set. On an example set consisting of three video applications, we show that the solution obtained by the use of our method is 38% smaller than that obtained when each application is considered independently.
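The biasing idea — letting mapping decisions in more critical applications pin the mapping of common tasks in less critical ones — can be sketched as a greedy pass over applications in criticality order. The simple cost comparison below stands in for the GCLP algorithm, and all task names and costs are assumptions.

```python
# Sketch of common-task biasing across a multifunction application set.

def partition(apps, common_tasks):
    """apps: list of (criticality, {task: (hw_cost, sw_cost)}).
    Returns one task -> 'hw'/'sw' mapping per application, most critical
    application decided first."""
    pinned, mappings = {}, []
    for _, tasks in sorted(apps, key=lambda a: -a[0]):
        m = {}
        for task, (hw, sw) in tasks.items():
            if task in pinned:
                m[task] = pinned[task]       # reuse the earlier decision
            else:
                m[task] = "hw" if hw < sw else "sw"
                if task in common_tasks:
                    pinned[task] = m[task]   # bias later applications
        mappings.append(m)
    return mappings

apps = [
    (2, {"dct": (3, 9), "vlc": (5, 4)}),     # more critical application
    (1, {"dct": (8, 2), "me":  (6, 7)}),
]
print(partition(apps, {"dct"}))
```

In this toy run, the critical application maps the shared `dct` task to hardware, and the less critical application inherits that mapping even though software looks locally cheaper to it — a consistent mapping of the common task, which is the stated goal.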