
Showing papers on "Systems architecture published in 1998"


Journal ArticleDOI
TL;DR: An overview of PROSA, the holonic reference architecture for manufacturing systems developed at PMA-KULeuven, which is shown to cover aspects of both hierarchical and heterarchical control approaches.

1,408 citations


Book ChapterDOI
30 Mar 1998
TL;DR: A resource management architecture, implemented in the Globus metacomputing toolkit, that distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components.
Abstract: Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.

719 citations
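As an illustration of the division of labor described above, the following Python sketch routes a resource request through a broker and an all-or-nothing co-allocator onto site-autonomous local managers. The class names, the dictionary-based request, and the first-fit policy are invented for exposition; this is not the Globus RSL syntax or the GRAM API.

    # Illustrative sketch only: class and field names are invented and do not
    # reproduce the actual Globus RSL syntax or GRAM interfaces.

    class LocalManager:
        """Site-autonomous manager that decides whether to run a request locally."""
        def __init__(self, site, cpus):
            self.site, self.free_cpus = site, cpus

        def can_allocate(self, cpus):
            return cpus <= self.free_cpus

        def allocate(self, cpus):
            assert self.can_allocate(cpus)
            self.free_cpus -= cpus
            return f"job@{self.site}"


    class CoAllocator:
        """Splits a multi-site request and commits it only if every part fits."""
        def __init__(self, managers):
            self.managers = managers

        def co_allocate(self, parts):                      # parts: {site: cpus}
            chosen = [(self.managers[s], n) for s, n in parts.items()]
            if all(m.can_allocate(n) for m, n in chosen):  # all-or-nothing
                return [m.allocate(n) for m, n in chosen]
            return None


    class ResourceBroker:
        """Maps an abstract requirement onto concrete sites (policy lives here)."""
        def __init__(self, co_allocator, managers):
            self.co_allocator, self.managers = co_allocator, managers

        def submit(self, spec):                            # spec: {"cpus": total}
            plan, remaining = {}, spec["cpus"]
            for m in self.managers.values():               # naive first-fit policy
                take = min(remaining, m.free_cpus)
                if take:
                    plan[m.site] = take
                    remaining -= take
            return self.co_allocator.co_allocate(plan) if remaining == 0 else None


    managers = {s: LocalManager(s, c) for s, c in [("siteA", 8), ("siteB", 4)]}
    broker = ResourceBroker(CoAllocator(managers), managers)
    print(broker.submit({"cpus": 10}))   # e.g. ['job@siteA', 'job@siteB']

The all-or-nothing check in the co-allocator mirrors the co-allocation concern named in the abstract: a multi-site request is only useful if every part of it can be secured.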


Journal ArticleDOI
TL;DR: A hierarchical user mobility model is developed that closely represents the movement behavior of a mobile user and that, when used with appropriate pattern matching and Kalman filtering techniques, yields an accurate location prediction algorithm, HLP, or hierarchical location prediction, which provides necessary information for advance resource reservation and advance optimal route establishment in wireless ATM networks.
Abstract: Wireless ATM networks require efficient mobility management to cope with frequent mobile handoff and rerouting of connections. Although much attention has been given in the literature to network architecture design to support wide-area mobility in public ATM networks, little has been done to the important issue of user mobility estimation and prediction to improve the connection reliability and bandwidth efficiency of the underlying system architecture. This paper treats the problem by developing a hierarchical user mobility model that closely represents the movement behavior of a mobile user, and that, when used with appropriate pattern matching and Kalman filtering techniques, yields an accurate location prediction algorithm, HLP, or hierarchical location prediction, which provides necessary information for advance resource reservation and advance optimal route establishment in wireless ATM networks.

619 citations
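For intuition about the filtering half of the approach, here is a minimal constant-velocity Kalman filter in Python whose one-step-ahead prediction could feed advance resource reservation at the next likely location. It is only a sketch under assumed noise values and one-dimensional motion; the paper's HLP algorithm additionally uses a hierarchical mobility model and pattern matching, which are not reproduced here.

    # Minimal sketch, not the paper's HLP algorithm: a 1-D constant-velocity
    # Kalman filter whose one-step-ahead prediction could be used to reserve
    # resources near the next predicted position. Noise values are assumed.
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [pos, vel]
    H = np.array([[1.0, 0.0]])                 # we only observe position
    Q = 0.01 * np.eye(2)                       # process noise (assumed)
    R = np.array([[0.5]])                      # measurement noise (assumed)

    x = np.array([[0.0], [0.0]])               # initial state estimate
    P = np.eye(2)                              # initial covariance

    def step(x, P, z):
        # predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # update with the new position measurement z
        y = np.array([[z]]) - H @ x_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ y
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    for z in [1.0, 2.1, 2.9, 4.2, 5.0]:        # noisy positions along a path
        x, P = step(x, P, z)

    next_pos = (F @ x)[0, 0]                   # one-step-ahead prediction
    print(f"predicted next position: {next_pos:.2f}")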


Journal Article
TL;DR: A set of modeling constructs that facilitate the specification of complex software architectures for real-time systems is described; the constructs are derived from field-proven concepts originally defined in the ROOM modeling language, and it is shown how they can be represented using the industry-standard Unified Modeling Language (UML).
Abstract: Real-time software systems encountered in telecommunications, aerospace, and defense often tend to be very large and extremely complex. It is crucial in such systems that the software has a well-defined architecture. This not only facilitates construction of the initial system, it also simplifies system evolution. We describe a set of modeling constructs that facilitate the specification of complex software architectures for real-time systems. These constructs are derived from field-proven concepts originally defined in the ROOM modeling language. Furthermore, we show how they can be represented using the industry-standard Unified Modeling Language (UML) by using the powerful extensibility mechanisms of UML.

578 citations


Proceedings Article
01 Jan 1998
TL;DR: The changes to GALAXY that led to this first reference architecture, which makes use of a scripting language for flow control to provide flexible interaction among the servers, and a set of libraries to support rapid prototyping of new servers, are documented.
Abstract: GALAXY is a client-server architecture for accessing on-line information using spoken dialogue that we introduced at ICSLP94. It has served as the testbed for developing human language technologies for our group for several years. Recently, we have initiated a significant redesign of the GALAXY architecture to make it easier for many researchers to develop their own applications, using either exclusively their own servers or intermixing them with servers developed by others. This redesign was done in part due to the fact that GALAXY has been designated as the first reference architecture for the new DARPA Communicator Program. The purpose of this paper is to document the changes to GALAXY that led to this first reference architecture, which makes use of a scripting language for flow control to provide flexible interaction among the servers, and a set of libraries to support rapid prototyping of new servers. We describe the new reference architecture in some detail, and report on the current status of its development.

315 citations


Journal ArticleDOI
TL;DR: A novel, scenario based notation called Use Case Maps (UCMs) for describing, in a high level way, how the organizational structure of a complex system and the emergent behavior of the system are intertwined.
Abstract: The paper presents a novel, scenario based notation called Use Case Maps (UCMs) for describing, in a high level way, how the organizational structure of a complex system and the emergent behavior of the system are intertwined. The notation is not a behavior specification technique in the ordinary sense, but a notation for helping a person to visualize, think about, and explain the big picture. UCMs are presented as "architectural entities" that help a person stand back from the details during all phases of system development. The notation has been thoroughly exercised on systems of industrial scale and complexity and the distilled essence of what has been found to work in practice is summarized. Examples are presented that confront difficult complex system issues directly: decentralized control, concurrency, failure, diversity, elusiveness and fluidity of runtime views of software, self modification of system makeup, difficulty of seeing large scale units of emergent behavior cutting across systems as coherent entities (and of seeing how such entities arise from the collective efforts of components), and large scale.

303 citations


Journal ArticleDOI
01 Jul 1998
TL;DR: In this paper, the authors present a formalism for the definition of software architectures in terms of graphs, where nodes represent the individual agents and edges define their interconnection, and the dynamic evolution of an architecture is defined independently by a "coordinator".
Abstract: We believe that software architectures should provide an appropriate basis for the proof of properties of large software. This goal can be achieved through a clearcut separation between computation and communication and a formal definition of the interactions between individual components. We present a formalism for the definition of software architectures in terms of graphs. Nodes represent the individual agents and edges define their interconnection. Individual agents can communicate only along the links specified by the architecture. The dynamic evolution of an architecture is defined independently by a "coordinator". An architecture style is a class of architectures specified by a graph grammar. The class characterizes a set of architectures sharing a common communication pattern. The rules of the coordinator are statically checked to ensure that they preserve the constraints imposed by the architecture style.

280 citations
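A toy rendering of the central constraint, in Python: agents may communicate only along links declared in the architecture graph, and a style predicate accepts or rejects a configuration according to a shared communication pattern. The class and function names are invented, and the "star" style below stands in for what the paper expresses with graph grammars.

    # Sketch of the constraint described above (invented names, not the paper's
    # formalism): components may exchange messages only along declared edges,
    # and a simple "style" predicate checks a configuration against a pattern.

    class Architecture:
        def __init__(self):
            self.nodes, self.edges = set(), set()

        def add_node(self, n):
            self.nodes.add(n)

        def connect(self, a, b):
            self.edges.add(frozenset((a, b)))

        def may_communicate(self, a, b):
            return frozenset((a, b)) in self.edges


    def is_star_style(arch, hub):
        """Example style: every edge must touch the designated hub node."""
        return all(hub in edge for edge in arch.edges)


    arch = Architecture()
    for n in ("coordinator", "worker1", "worker2"):
        arch.add_node(n)
    arch.connect("coordinator", "worker1")
    arch.connect("coordinator", "worker2")

    print(arch.may_communicate("worker1", "worker2"))   # False: no such link
    print(is_star_style(arch, "coordinator"))           # True: star pattern holds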


Patent
David S. Miller1
04 Mar 1998
TL;DR: In this article, the authors proposed a new signal processing architecture for base stations and gateways (124, 126) used in spread spectrum communication systems (100) that simplifies data transfer, reduces required bus capacity, and does not require special synchronization of signals that are to be combined.
Abstract: A new signal processing architecture for base stations and gateways (124, 126) used in spread spectrum communication systems (100) that simplifies data transfer, reduces required bus capacity, and does not require special synchronization of signals that are to be combined. A series of transmission modules (508_1-508_M) are used to transfer data to corresponding ones of a series of analog transmitters (412_1-412_M) used to form communication circuits for each system user. Each transmission module (508_1-508_M) employs a series of encoders (502_MR) and modulators (504_MS) to form spread communication signals, using appropriate PN spreading codes. Spread spectrum communication signals from each module (508) for each system user (D) are summed together (510_1-510_M) and transferred to a single analog transmitter (412) associated with that module. The signals being combined are automatically synchronized by common timing signals used for elements within each module. The number of processing elements within each module is such that at least one processing path is available for each user or user channel over which it is desired to transmit information through the connected analog transmitter (412_1-412_M). Data is output from the modules (508_1-508_M) at a greatly reduced transfer rate which can be more easily accommodated using current technology. This is very useful for satellite based communication systems, or high capacity cellular systems, and this system architecture can be accomplished cost effectively using a series of easily manufactured circuit modules.

254 citations
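The combining step can be illustrated with a chip-synchronous baseband toy in Python: each user's bits are spread with a distinct code, the per-user signals of one module are summed, and the sum would drive that module's single analog transmitter. Orthogonal Walsh codes and perfect timing are assumed here purely to keep the despreading exact; this is not the patented circuit design.

    # Toy baseband illustration of the combining idea (not the patented circuit).
    import numpy as np

    def walsh(n):
        """Hadamard/Walsh matrix of order n (n a power of two); rows are orthogonal."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    chips_per_bit = 8
    codes = walsh(chips_per_bit)
    num_users, num_bits = 3, 4
    rng = np.random.default_rng(0)
    data = rng.choice([-1, 1], size=(num_users, num_bits))   # +/-1 data bits

    def spread(bits, code):
        # repeat each bit over the chip interval and multiply by the spreading code
        return np.concatenate([b * code for b in bits])

    # sum the per-user spread signals of one module (users use Walsh rows 1..3)
    module_sum = sum(spread(data[u], codes[u + 1]) for u in range(num_users))

    # despread user 1 by correlating each chip interval with that user's code
    u = 1
    rx = module_sum.reshape(num_bits, chips_per_bit) @ codes[u + 1] / chips_per_bit
    print("sent:     ", data[u])
    print("recovered:", np.sign(rx).astype(int))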


Patent
30 Sep 1998
TL;DR: A client-server system architecture in which computer processes are defined as metadata-driven sequences of standardized user-interface screens, allowing the processes to be maintained in a central location and their use managed remotely within a network, while the computer process automatically takes a user from screen to screen, prompting the user to review or provide information or take appropriate action.
Abstract: Computer processes for carrying out almost any process may be defined as a series of steps using a plurality of standardized user-interface screens. These standardized interface screens may be linked together in predetermined orders to implement on a client computer activities for which the standardized screens are appropriate to accomplish a pre-defined process. Any number of computer processes may be developed and deployed using the standard interfaces. The computer process automatically takes a user from screen to screen, prompting the user to review or provide information or take appropriate action. Processes may be represented using metadata. Metadata may provide data to a screen rendering process running on a user's workstation with details on how to render one of a plurality of standard screens in a manner which is specific to a particular process. Metadata may be provided to define the steps of the process for enabling navigational capabilities. Metadata may be stored in a database and communicated by a process server to a client computer, which acts as a user's workstation. This client-server system architecture allows maintenance of the computer processes in a central location and remote management of their use within a network. Furthermore, any number of application-specific computer processes may be made available and distributed to users without detailed programs for those processes having to be stored at each user workstation. Furthermore, basic interface functions with legacy databases and back-end systems may be provided to each user workstation in a network through the server system.

233 citations
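A minimal sketch of the metadata-driven idea, with invented field names rather than anything from the patent: the server stores only a description of the steps, and a generic client renderer walks them, so new processes can be deployed by changing data instead of installing programs on each workstation.

    # Illustrative sketch (invented field names): process metadata of this shape
    # would live in the server's database; the client only knows how to render
    # the standard screen types the metadata names.

    PROCESS_METADATA = {
        "name": "expense-approval",
        "steps": [
            {"screen": "form",    "prompt": "Enter the expense details"},
            {"screen": "review",  "prompt": "Review the submitted amounts"},
            {"screen": "confirm", "prompt": "Approve or reject the expense"},
        ],
    }

    def render_standard_screen(step):
        """Stand-in for the client's generic renderer of standardized screens."""
        print(f"[{step['screen']:>7}] {step['prompt']}")

    def run_process(metadata):
        # The server-defined step order, not client code, drives the navigation.
        for step in metadata["steps"]:
            render_standard_screen(step)
        print(f"process '{metadata['name']}' complete")

    run_process(PROCESS_METADATA)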


Patent
17 Sep 1998
TL;DR: A fast data transfer collection system using message authentication and contactless RF proximity card technology in non-contact storage and retrieval applications is described in this article. But the system is not suitable for the use of large numbers of tags.
Abstract: A fast data transfer collection system using message authentication and contactless RF proximity card technology in non-contact storage and retrieval applications. The system is generally comprised of Host computers (application computer systems), Target radio frequency (RF) terminals, and a plurality of portable Tags ('smart' or 'proximity' cards). A Host provides specific application functionality to a Tag holder, with a high degree of protection from fraudulent use. A Target provides control of the RF antenna and resolves collisions between multiple Tags in the RF field. A Tag provides reliable, high speed, and well authenticated secure exchanges of data/information with the Host resulting from the use of a custom ASIC design incorporating unique analog and digital circuits, nonvolatile memory, and state logic. Each Tag engages in a transaction with the Target in which a sequence of message exchanges allow data to be read (written) from (to) the Tag. These exchanges establish the RF communication link, resolve communication collisions with other Tags, authenticate both parties in the transaction, rapidly and robustly relay information through the link, and ensure the integrity and incorruptibility of the transaction. The system architecture provides capabilities to ensure the integrity of the data transferred thus eliminating the major problem of corrupting data on the card and in the system. The architecture and protocol are designed to allow simple and efficient integration of the transaction product system into data/information processing installations.

201 citations


Journal ArticleDOI
01 Dec 1998
TL;DR: The time management services are described, highlighting information that must flow between federates and the Runtime Infrastructure (RTI) software in order to efficiently implement time management algorithms.
Abstract: Time management is required in simulations to ensure that temporal aspects of the system under investigation are correctly reproduced by the simulation model. This paper describes the time managem...

Journal ArticleDOI
TL;DR: Numerical testing shows that the holonic scheduling method can generate near-optimal schedules with quantifiable quality in a timely fashion, and has computational requirements and performance comparable to the centralized method following single-level Lagrangian relaxation.

Book ChapterDOI
21 Sep 1998
TL;DR: In this paper, the authors describe a digital object and repository architecture for storing and disseminating digital library content, which includes support for heterogeneous data types, accommodation of new types as they emerge, aggregation of mixed, possibly distributed, data into complex objects, and the ability to specify multiple content disseminations of these objects.
Abstract: We describe a digital object and repository architecture for storing and disseminating digital library content. The key features of the architecture are: (1) support for heterogeneous data types; (2) accommodation of new types as they emerge; (3) aggregation of mixed, possibly distributed, data into complex objects; (4) the ability to specify multiple content disseminations of these objects; and (5) the ability to associate rights management schemes with these disseminations. This architecture is being implemented in the context of a broader research project to develop next-generation service modules for a layered digital library architecture.
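To make the object model concrete, here is a small Python sketch in which a digital object aggregates typed, possibly remote, datastreams and exposes several named disseminations of the same content. The class and method names are invented for illustration and do not reproduce the architecture's actual interfaces; rights-management hooks would attach to the disseminators in a similar way.

    # Minimal sketch with invented names (not the paper's actual interfaces).

    class DigitalObject:
        def __init__(self, identifier):
            self.identifier = identifier
            self.datastreams = {}        # name -> (mime type, location)
            self.disseminators = {}      # view name -> function producing the view

        def add_datastream(self, name, mime, location):
            self.datastreams[name] = (mime, location)

        def add_disseminator(self, view, producer):
            self.disseminators[view] = producer

        def disseminate(self, view):
            return self.disseminators[view](self)


    obj = DigitalObject("report:1998-042")
    obj.add_datastream("scan-tiff", "image/tiff", "http://archive.example/42.tif")
    obj.add_datastream("fulltext",  "text/plain", "local:42.txt")

    # Two disseminations over the same aggregated content.
    obj.add_disseminator("thumbnail",
                         lambda o: f"render {o.datastreams['scan-tiff'][1]} at 128px")
    obj.add_disseminator("summary",
                         lambda o: f"first 200 chars of {o.datastreams['fulltext'][1]}")

    print(obj.disseminate("thumbnail"))
    print(obj.disseminate("summary"))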

Proceedings ArticleDOI
07 Nov 1998
TL;DR: The VIA prototype is compared against established research user-level networks using simple communication benchmarks on the same hardware, and extensions to the VI Architecture that improve its performance for certain types of communication traffic are considered.
Abstract: Rapid developments in networking technology and a rise in clustered computing have driven research studies in high performance communication architectures. In an effort to standardize the work in this area, industry leaders have developed the Virtual Interface Architecture (VIA) specification. This architecture seeks to provide an operating system-independent infrastructure for high-performance user-level networking in a generic environment. This paper evaluates the inherent costs and performance potential of the Virtual Interface Architecture through a prototype implementation over Myrinet. The VIA prototype is compared against established research user-level networks using simple communication benchmarks on the same hardware. We consider extensions to the VI Architecture that improve its performance for certain types of communication traffic and outline further research areas in the VIA design space that merit investigation.

Proceedings ArticleDOI
13 May 1998
TL;DR: This paper introduces a new field-programmable architecture targeted at compute-intensive applications and shows that it is more area-efficient than traditional FPGAs by a factor of more than 2.5.

Abstract: This paper introduces a new field-programmable architecture that is targeted at compute-intensive applications. These applications are important because of their use in the expanding multi-media markets in signal and data processing. We explain the design methodology, layout and implementation of the new architecture. A synthesis method has also been developed with which we have mapped several circuits to the new architecture. In this paper, we show that the invented architecture is more area-efficient than traditional FPGAs by a factor of more than 2.5.

Book ChapterDOI
28 Mar 1998
TL;DR: The Open/CAEsar software architecture, which allows different languages/formalisms for the description of concurrent systems to be integrated into a common framework, together with tools offering various functionalities such as random execution, interactive simulation, on-the-fly and exhaustive verification, and test generation.

Abstract: This paper presents the Open/CAEsar software architecture, which allows different languages/formalisms for the description of concurrent systems to be integrated into a common framework, together with tools offering various functionalities such as random execution, interactive simulation, on-the-fly and exhaustive verification, and test generation. These principles have been fully implemented, leading to an open, extensible, and well-documented programming environment, which allows tools to be developed in a modular framework, independently of any particular description language.

Journal ArticleDOI
B.P. Dave1, N.K. Jha2
TL;DR: This paper addresses the problem of hardware-software cosynthesis of hierarchical heterogeneous distributed embedded system architectures from hierarchical or nonhierarchical task graphs and shows how the cosynthesis algorithm can be easily extended to consider fault tolerance or low-power objectives or both.
Abstract: Hardware-software cosynthesis of an embedded system architecture entails partitioning of its specification into hardware and software modules such that its real-time and other constraints are met. Embedded systems are generally specified in terms of a set of acyclic task graphs. For medium- to large-scale embedded systems, the task graphs are usually hierarchical in nature. The embedded system architecture, which is the output of the cosynthesis system, may itself be nonhierarchical or hierarchical. Traditional nonhierarchical architectures create communication and processing bottlenecks and are impractical for large embedded systems. Such systems require a large number of processing elements and communication links connected in a hierarchical manner, thus forming a hierarchical distributed architecture, to meet performance and cost objectives. In this paper, we address the problem of hardware-software cosynthesis of hierarchical heterogeneous distributed embedded system architectures from hierarchical or nonhierarchical task graphs. Our cosynthesis algorithm has the following features: 1) it supports periodic task graphs with real-time constraints, 2) it supports pipelining of task graphs, 3) it supports a heterogeneous set of processing elements and communication links, 4) it allows both sequential and concurrent modes of communication and computation, 5) it employs a combination of preemptive and nonpreemptive static scheduling, 6) it employs a new task-clustering technique suitable for hierarchical task graphs, and 7) it uses the concept of association arrays to tackle the problem of multirate tasks encountered in multimedia systems. We show how our cosynthesis algorithm can be easily extended to consider fault tolerance or low-power objectives or both. Although hierarchical architectures have been proposed before, to the best of our knowledge, this is the first time the notion of hierarchical task graphs and hierarchical architectures has been supported in a cosynthesis algorithm.

Proceedings ArticleDOI
01 Apr 1998
TL;DR: This work presents two examples of extending UML, an emerging standard design notation, for use with two architecture description languages, C2 and Wright, and suggests a practical strategy for bringing architectural modeling into wider use.
Abstract: Software architecture descriptions are high-level models of software systems. Some researchers have proposed special-purpose architectural notations that have a great deal of expressive power but are not well integrated with common development methods. Others have used mainstream development methods that are accessible to developers, but lack semantics needed for extensive analysis. We describe an approach to combining the advantages of these two ways of modeling architectures. We present two examples of extending UML, an emerging standard design notation, for use with two architecture description languages, C2 and Wright. Our approach suggests a practical strategy for bringing architectural modeling into wider use, namely by incorporating substantial elements of architectural models into a standard design method.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The paper examines the problem of mapping a high-level specification to an arbitrary architecture that uses specific, common bus protocols for interprocessor communication, and presents a communication model that allows for easy retargeting to different bus topologies and protocols and illustrates that global considerations are required to achieve a correct implementation.
Abstract: Designers of distributed embedded systems face many challenges in determining the tradeoffs when defining a system architecture or retargeting an existing design. Communication synthesis, the automatic generation of the necessary software and hardware for system components to exchange data, is required to more effectively explore the design space and automate very error prone tasks. The paper examines the problem of mapping a high level specification to an arbitrary architecture that uses specific, common bus protocols for interprocessor communication. The communication model presented allows for easy retargeting to different bus topologies and protocols, and illustrates that global considerations are required to achieve a correct implementation. An algorithm is presented that partitions multihop communication timing constraints to effectively utilize the bus bandwidth along a message path. The communication synthesis tool is integrated with a system co-simulator to provide performance data for a given mapping.
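One plausible reading of the constraint-partitioning step, sketched in Python: an end-to-end deadline on a multihop message is split into per-hop budgets, here by giving each hop a share of the slack proportional to its expected transfer time on that bus. The bus names, numbers, and the proportional rule are assumptions for illustration, not the paper's actual algorithm.

    # Sketch of one plausible way to split an end-to-end deadline across the hops
    # of a message path (not the paper's actual algorithm): each hop receives a
    # share of the slack proportional to its expected transfer time on that bus.

    def partition_deadline(hops, message_bytes, end_to_end_deadline_us):
        """hops: list of (bus_name, bandwidth_bytes_per_us, fixed_overhead_us)."""
        transfer = [message_bytes / bw + ovh for _, bw, ovh in hops]
        total = sum(transfer)
        if total > end_to_end_deadline_us:
            raise ValueError("deadline infeasible even with zero queuing")
        slack = end_to_end_deadline_us - total
        budgets = {}
        for (bus, _, _), t in zip(hops, transfer):
            budgets[bus] = t + slack * (t / total)   # proportional share of slack
        return budgets


    path = [("CAN-A", 0.125, 50.0),     # ~1 Mbit/s and 50 us overhead (illustrative)
            ("backbone", 4.0, 10.0),
            ("CAN-B", 0.125, 50.0)]
    print(partition_deadline(path, message_bytes=64, end_to_end_deadline_us=2000.0))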

Journal ArticleDOI
TL;DR: The constraint-based configurator COCOS, built on an extension of the standard CSP model, combines a simple, declarative representation with the ability to configure large-scale systems and is in use for actual production applications.
Abstract: This paper describes the technical principles and representation behind the constraint-based, automated configurator COCOS. Traditionally, representation methods for technical configuration have focused either on reasoning about structure of systems or quantity of components, which is not satisfactory in many target areas that need both. Starting from general requirements on configuration systems, we have developed an extension of the standard CSP model. The constraint-based approach allows a simple system architecture, and a declarative description of the different types of configuration knowledge. Knowledge bases are described in terms of a component-centered knowledge base written in an object-oriented representation language with semantics directly based on the underlying constraint model. The approach combines a simple, declarative representation with the ability to configure large-scale systems and is in use for actual production applications.
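A toy generate-and-test configuration problem in Python gives the flavor of combining quantity and structure constraints, which the abstract identifies as the gap in earlier representations. The component types, capacities, and costs are invented, and COCOS's constraint-based, object-oriented representation is far richer than this brute-force search.

    # Toy generate-and-test configuration (not COCOS's actual representation):
    # decide how many modules and frames to buy so that every requested line is
    # served, every module sits in a frame slot, and cost is minimal.

    from itertools import product

    LINES_REQUIRED = 10
    MODULE_CAPACITY = 4        # lines per module
    FRAME_SLOTS = 3            # modules per frame
    COST = {"module": 100, "frame": 250}

    best = None
    for modules, frames in product(range(1, 9), range(1, 5)):
        if modules * MODULE_CAPACITY < LINES_REQUIRED:
            continue                       # quantity constraint: enough capacity
        if modules > frames * FRAME_SLOTS:
            continue                       # structural constraint: enough slots
        cost = modules * COST["module"] + frames * COST["frame"]
        if best is None or cost < best[0]:
            best = (cost, modules, frames)

    cost, modules, frames = best
    print(f"{modules} modules in {frames} frame(s), total cost {cost}")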

Patent
06 May 1998
TL;DR: In this paper, a machine diagnostic system is provided which includes a host computer for determining a health state of a machine, the host computer is operatively coupled to a network backbone and the system architecture includes several software layers which provide for collecting and preprocessing machine data, transmitting the collected and/or preprocessed data over a network and analyzing such for machine diagnosis and process diagnosis.
Abstract: A machine diagnostic system is provided which includes a host computer for determining a health state of a machine. The host computer is operatively coupled to a network backbone. The system also includes a machine diagnostic module adapted to be integrally mounted to a machine, the machine diagnostic module being operatively coupled to the network backbone. The machine diagnostic module collects data relating to operation of the machine and preprocesses the data, and the host computer analyzes the preprocessed data in determining the health state of the machine. The machine diagnostic system includes a system architecture which facilitates the machine diagnosis. The system architecture includes several software layers which provide for collecting and preprocessing machine data, transmitting the collected and/or preprocessed data over a network and analyzing such for machine diagnosis and process diagnosis.

Proceedings ArticleDOI
14 Mar 1998
TL;DR: This paper serves as a general introduction to Bamboo, describing the system's architecture, implementation and future directions, and showing how the system can facilitate the rapid development of robust applications by promoting code reuse via community-wide exchange.
Abstract: Bamboo is a portable system supporting real-time, networked, virtual environments. Unlike previous efforts, this design focuses on the ability of the system to dynamically configure itself without explicit user interaction, allowing applications to take on new functionality after execution. In particular, this framework facilitates the discovery of virtual environments on the network at runtime. Fundamentally, Bamboo offers a compatible set of mechanisms needed for a wide variety of real-time, networked applications. Also included is a particular combination of these mechanisms supporting a dynamically extensible runtime environment. This paper serves as a general introduction to Bamboo. It describes the system's architecture, implementation and future directions. It also shows how the system can facilitate the rapid development of robust applications by promoting code reuse via community-wide exchange.

Patent
06 Apr 1998
TL;DR: An architecture for a computer system that runs applications serving multiple users, in which each real-time processor can be dedicated to running just one instance of an application.
Abstract: An architecture is disclosed for a computer system that runs applications serving multiple users. The computer system includes multiple processors, some of which run quick applications, i.e., those requiring real-time response, while others run applications with less stringent requirements. Each real-time processor can be dedicated to running just one instance of an application. The processors can be of disparate types running disparate operating systems and optimized for disparate applications. The system is centrally controlled, with the processors communicating among themselves over a shared LAN or via a communications switch. The system may also facilitate simultaneous voice and data communications among users. Users communicate with the system using any of a number of standard techniques, including dial-up telephone lines, ISDN, packet access services, ADSL, cable TV and the like.

Proceedings ArticleDOI
16 Apr 1998
TL;DR: This paper develops and validates an analytical model for evaluating various types of architectural alternatives for shared-memory systems with processors that aggressively exploit instruction-level parallelism and shows that the analytical model can be used to gain insights into application performance and to evaluate architectural design trade-offs.
Abstract: This paper develops and validates an analytical model for evaluating various types of architectural alternatives for shared-memory systems with processors that aggressively exploit instruction-level parallelism. Compared to simulation, the analytical model is many orders of magnitude faster to solve, yielding highly accurate system performance estimates in seconds. The model input parameters characterize the ability of an application to exploit instruction-level parallelism as well as the interaction between the application and the memory system architecture. A trace-driven simulation methodology is developed that allows these parameters to be generated over 100 times faster than with a detailed execution-driven simulator. Finally, this paper shows that the analytical model can be used to gain insights into application performance and to evaluate architectural design trade-offs.
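As a back-of-the-envelope illustration of the kind of inputs such a model consumes, the sketch below estimates CPI from an application's exposed instruction-level parallelism plus a memory-stall term. The formula and parameter values are assumptions for exposition and are much cruder than the validated model in the paper.

    # Back-of-the-envelope sketch, not the paper's model: estimate CPI from a few
    # application/architecture parameters of the kind such analytical models take.

    def estimate_cpi(issue_width, app_ilp, miss_rate, miss_latency, overlap_fraction):
        """
        issue_width      : peak instructions issued per cycle by the processor
        app_ilp          : average independent instructions the application exposes
        miss_rate        : cache misses per instruction
        miss_latency     : cycles to service a miss
        overlap_fraction : fraction of miss latency hidden by out-of-order execution
        """
        base_cpi = 1.0 / min(issue_width, app_ilp)                 # compute-limited part
        stall_cpi = miss_rate * miss_latency * (1.0 - overlap_fraction)
        return base_cpi + stall_cpi


    cpi = estimate_cpi(issue_width=4, app_ilp=2.5,
                       miss_rate=0.02, miss_latency=60, overlap_fraction=0.4)
    print(f"estimated CPI: {cpi:.2f}, IPC: {1.0 / cpi:.2f}")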

Proceedings ArticleDOI
04 Mar 1998
TL;DR: This paper investigates the ability to perform behaviour analysis on systems which conform to the change model and uses Labelled Transition Systems to specify behaviour and Compositional Reachability Analysis to check composite system models.
Abstract: The software architecture of a system is the overall structure of the system in terms of its constituent components and their interconnections. Dynamic changes to the instantiated system architecture, to the components and/or interconnections, may take place while it is running. In order that these changes do not violate the integrity of the system, we adopt a general model of dynamic configuration which only permits change to occur when the affected portions of the system are quiescent. In this paper we investigate the ability to perform behaviour analysis on systems which conform to the change model. Our analysis approach associates behavioural specifications with the components of a software architecture and analyses the behaviour of systems composed from these components. We use Labelled Transition Systems to specify behaviour and Compositional Reachability Analysis to check composite system models. We model the changes that can occur and use analysis to check that the architecture satisfies the properties required of it: before, during and after the change. The paper uses an example to illustrate the approach and discusses some issues arising from the work.
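The quiescence rule at the heart of the change model can be illustrated with a small Python sketch: a component may be replaced only when it has no transaction in progress and has been told to stop accepting new ones. The manager and component classes are invented stand-ins; the paper's actual analysis works on Labelled Transition Systems and Compositional Reachability Analysis rather than runtime checks.

    # Toy illustration of the quiescence rule (not the paper's LTS/CRA machinery):
    # a component may be replaced only while it is quiescent, i.e. it has no
    # transaction in progress and its neighbours may not start new ones with it.

    class Component:
        def __init__(self, name):
            self.name = name
            self.active_transactions = 0
            self.accepting = True          # neighbours may initiate new work

        def quiescent(self):
            return self.active_transactions == 0 and not self.accepting


    class ConfigurationManager:
        def __init__(self):
            self.components = {}

        def add(self, comp):
            self.components[comp.name] = comp

        def passivate(self, name):
            self.components[name].accepting = False     # block new transactions

        def replace(self, name, new_comp):
            old = self.components[name]
            if not old.quiescent():
                raise RuntimeError(f"{name} not quiescent; change would break integrity")
            self.components[name] = new_comp


    mgr = ConfigurationManager()
    server = Component("server")
    mgr.add(server)

    server.active_transactions = 1
    mgr.passivate("server")
    try:
        mgr.replace("server", Component("server-v2"))
    except RuntimeError as e:
        print(e)

    server.active_transactions = 0                      # in-flight work drains
    mgr.replace("server", Component("server-v2"))
    print("replaced while quiescent")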

Proceedings ArticleDOI
01 Jan 1998
TL;DR: The design of Symphony, an integrated file system that achieves the coexistence of multiple data-type-specific techniques, is described; its novel features include a QoS-aware disk scheduling algorithm, support for data-type-specific placement, failure recovery, and caching policies, and support for assigning data-type-specific structure to files.
Abstract: An integrated multimedia file system supports the storage and retrieval of multiple data types. In this paper, we first discuss various design methodologies for building integrated file systems and examine their tradeoffs. We argue that, to efficiently support the storage and retrieval of heterogeneous data types, an integrated file system should enable the coexistence of multiple data type specific techniques. We then describe the design of Symphony, an integrated file system that achieves this objective. Some of the novel features of Symphony include: a QoS-aware disk scheduling algorithm; support for data type specific placement, failure recovery, and caching policies; and support for assigning data type specific structure to files. We discuss the prototype implementation of Symphony, and present results of our preliminary experimental evaluation.
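For a sense of how request classes can coexist, here is a sketch of a QoS-aware disk scheduler in Python that serves real-time requests earliest-deadline-first and slips in best-effort requests only when the nearest deadline leaves enough slack. The policy, service time, and names are assumptions for illustration and are not Symphony's actual scheduling algorithm.

    # Sketch of one way to mix request classes (not Symphony's actual algorithm).
    import heapq

    SERVICE_TIME_MS = 5.0                       # assumed per-request service time

    class QoSDiskScheduler:
        def __init__(self):
            self.realtime = []                  # heap of (deadline_ms, name)
            self.best_effort = []               # FIFO list of names

        def submit_rt(self, name, deadline_ms):
            heapq.heappush(self.realtime, (deadline_ms, name))

        def submit_be(self, name):
            self.best_effort.append(name)

        def next_request(self, now_ms):
            if self.realtime:
                deadline, name = self.realtime[0]
                slack = deadline - now_ms - SERVICE_TIME_MS
                if slack < SERVICE_TIME_MS or not self.best_effort:
                    heapq.heappop(self.realtime)
                    return name                 # serve RT: deadline tight or nothing else pending
            if self.best_effort:
                return self.best_effort.pop(0)  # enough slack for best-effort work
            return None


    sched = QoSDiskScheduler()
    sched.submit_rt("video-frame-17", deadline_ms=40)
    sched.submit_rt("video-frame-18", deadline_ms=73)
    sched.submit_be("file-copy-block")

    now = 0.0
    while (req := sched.next_request(now)) is not None:
        print(f"t={now:4.0f} ms  serve {req}")
        now += SERVICE_TIME_MS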

Journal ArticleDOI
01 Mar 1998
TL;DR: Interest in force-feedback devices is, however, gaining momentum in the commercial sector, notably in the area of personal computer games, and the authors believe this interest will drive down the cost of components and spur research efforts so that better, more cost-effective force-feedback devices will be available to the medical community for use in widespread surgical-simulation systems.
Abstract: Surgical simulation can provide great benefits to medicine by reducing the cost and duration of training and making the process more intuitive and informative. However, a simulation system imposes stringent requirements on the human-machine interface. A sense of touch greatly enhances the simulation experience, since much of the skill that a medical professional possesses is in his ability to explore and diagnose by touch. This sensory input can be provided by an input device with force and/or tactile feedback. There are many technical challenges associated with the creation of a robust surgical-simulation system incorporating touch feedback. The medical application has unique needs that drive the design of the mechanism, the control scheme, the tissue deformation engine, and the overall system architecture and distribution of computation. This technology is not yet mature; several companies are dedicated to creating various parts of a simulation system, but as yet there are no commercially available solutions that are cost effective. Interest in force-feedback devices is, however, gaining momentum in the commercial sector, notably in the area of personal computer games. The authors believe this interest will drive down the cost of components and spur research efforts so that better, more cost-effective force-feedback devices will be available to the medical community for use in widespread surgical-simulation systems.

Proceedings ArticleDOI
07 Sep 1998
TL;DR: MMLite is a modular system architecture that is suitable for a wide variety of hardware and applications and provides a selection of object-based components that are dynamically assembled into a full application system.
Abstract: MMLite is a modular system architecture that is suitable for a wide variety of hardware and applications. The system provides a selection of object-based components that are dynamically assembled into a full application system. Amongst these components is a namespace, which supports a new programming model where components are automatically loaded on demand. The virtual memory manager is optional and is loaded on demand. Components can be easily replaced and reimplemented. A third party independently replaced the real-time scheduler with a different implementation. Componentization reduced the development time and led to a flexible and understandable system. MMLite is efficient, portable, and has a very small memory footprint. It runs on several microprocessors, including two VLIW processors. It is being used on processors that are embedded in a number of multimedia DirectX accelerator boards.

01 Jan 1998
TL;DR: This thesis contributes to ongoing standardization efforts that aim to support fault tolerance in CORBA using entity redundancy, by proposing a system model and an open architecture to add support for object groups to the CORBA middleware environment.
Abstract: Distributed computing is one of the major trends in the computer industry. As systems become more distributed, they also become more complex and have to deal with new kinds of problems, such as partial crashes and link failures. To answer the growing demand in distributed technologies, several middleware environments have emerged during the last few years. These environments however lack support for "one-to-many" communication primitives; such primitives greatly simplify the development of several types of applications that have requirements for high availability, fault tolerance, parallel processing, or collaborative work. One-to-many interactions can be provided by group communication. It manages groups of objects and provides primitives for sending messages to all members of a group, with various reliability and ordering guarantees. A group constitutes a logical addressing facility: messages can be issued to a group without having to know the number, identity, or location of individual members. The notion of group has proven to be very useful for providing high availability through replication: a set of replicas constitutes a group, but are viewed by clients as a single entity in the system. This thesis aims at studying and proposing solutions to the problem of object group support in object-based middleware environments. It surveys and evaluates different approaches to this problem. Based on this evaluation, we propose a system model and an open architecture to add support for object groups to the CORBA middleware environment. In doing so, we provide the application developer with powerful group primitives in the context of a standard object-based environment. This thesis contributes to ongoing standardization efforts that aim to support fault tolerance in CORBA, using entity redundancy. The group architecture proposed in this thesis, the Object Group Service (OGS), is based on the concept of component integration. It consists of several distinct components that provide various facilities for reliable distributed computing and that are reusable in isolation. Group support is ultimately provided by combining these components. OGS defines an object-oriented framework of CORBA components for reliable distributed systems. The OGS components include a group membership service, which keeps track of the composition of object groups, a group multicast service, which provides delivery of messages to all group members, a consensus service, which allows several CORBA objects to resolve distributed agreement problems, and a monitoring service, which provides distributed failure detection mechanisms. OGS includes support for dynamic group membership and for group multicast with various reliability and ordering guarantees. It defines interfaces for active and primary-backup replication. In addition, OGS proposes several execution styles and various levels of transparency. A prototype implementation of OGS has been realized in the context of this thesis. This implementation is available for two commercial ORBs (Orbix and VisiBroker). It relies solely on the CORBA specification, and is thus portable to any compliant ORB. Although the main theme of this thesis deals with system architecture, we have developed some original algorithms to implement group support in OGS. We analyze these algorithms and implementation choices in this dissertation, and we evaluate them in terms of efficiency. We also illustrate the use of OGS through example applications.

Book ChapterDOI
17 Aug 1998
TL;DR: Various architecture options of the Data Encryption Standard (DES) were designed, implemented and compared on FPGAs, with strong emphasis on high-speed performance, achieving encryption rates beyond 400 Mbit/s using a standard Xilinx FPGA.
Abstract: Most modern security protocols and security applications are defined to be algorithm independent, that is, they allow a choice from a set of cryptographic algorithms for the same function. Although an algorithm switch is rather difficult with traditional hardware, i.e., ASIC, implementations, Field Programmable Gate Arrays (FPGAs) offer a promising solution. Similarly, an ASIC-based key search machine is in general only applicable to one specific encryption algorithm. However, a key-search machine based on FPGAs can also be algorithm independent and thus be applicable to a wide variety of ciphers. We researched the feasibility of a universal key-search machine using the Data Encryption Standard (DES) as an example algorithm. We designed, implemented and compared various architecture options of DES with strong emphasis on high-speed performance. Techniques like pipelining and loop unrolling were used and their effectiveness for DES on FPGAs investigated. The most interesting result is that we could achieve encryption rates beyond 400 Mbit/s using a standard Xilinx FPGA. This result is faster than software implementations by a factor of about 30, while still maintaining flexibility. A DES cracker chip based on this design could search 6.29 million keys per second.
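A quick arithmetic check of what the quoted search rate implies for a single chip, assuming the full 56-bit DES key space and that on average half the keys must be tried before a hit:

    # Arithmetic on the quoted rate: time for one such chip to sweep the full
    # 56-bit DES key space, and the expected time to find a key (half the space).

    keys_per_second = 6.29e6
    key_space = 2 ** 56

    seconds_full = key_space / keys_per_second
    seconds_expected = seconds_full / 2
    year = 365.25 * 24 * 3600

    print(f"full sweep:     {seconds_full / year:,.0f} years")
    print(f"expected (1/2): {seconds_expected / year:,.0f} years")

The result, roughly 363 years for a full sweep by one chip, suggests that a practical key-search machine built from this design would replicate many such chips in parallel.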