scispace - formally typeset

Showing papers on "Application software published in 2002"


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the nature of the services that respond to protocol messages and propose a set of services that can be aggregated in various ways to meet the needs of virtual organizations, which themselves can be defined by the services they operate and share.
Abstract: Increasingly, computing addresses collaboration, data sharing, and interaction modes that involve distributed resources, resulting in an increased focus on the interconnection of systems both within and across enterprises. These evolutionary pressures have led to the development of Grid technologies. The authors' work focuses on the nature of the services that respond to protocol messages. Grid provides an extensible set of services that can be aggregated in various ways to meet the needs of virtual organizations, which themselves can be defined in part by the services they operate and share.

1,816 citations


Journal ArticleDOI
09 Dec 2002
TL;DR: The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth; the architecture includes novel techniques to balance emulation accuracy against scalability.
Abstract: This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology. This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.
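The emulation core described above can be approximated in a few lines: each hop of the target topology acts as a pipe that imposes a bandwidth, latency, and loss profile on the traffic routed through it. The sketch below is a toy model only; the class and field names are invented and do not reflect ModelNet's actual implementation.

```python
import random

class EmulatedLink:
    """Toy model of a ModelNet-style 'pipe': one hop in the target
    topology imposing bandwidth, propagation delay, and a loss rate.
    All names here are illustrative, not ModelNet's actual API."""
    def __init__(self, bandwidth_bps, latency_s, loss_rate, seed=0):
        self.bandwidth_bps = bandwidth_bps
        self.latency_s = latency_s
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)

    def transit_time(self, packet_bytes):
        """Serialization delay plus propagation delay, or None if dropped."""
        if self.rng.random() < self.loss_rate:
            return None  # packet lost on this hop
        return packet_bytes * 8 / self.bandwidth_bps + self.latency_s

# A 1 Mb/s link with 50 ms latency and no loss:
link = EmulatedLink(bandwidth_bps=1_000_000, latency_s=0.05, loss_rate=0.0)
delay = link.transit_time(1250)  # 1250 B = 10,000 bits -> 10 ms + 50 ms
```

Chaining such pipes along the path chosen from the target topology yields the end-to-end behavior the edge nodes observe.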

463 citations


Patent
09 Dec 2002
TL;DR: A business method utilizing a system comprising one or more distributed computers, application software, off-the-shelf peripheral components including keyboard-and-mouseless data entry (KDE) devices, business processes, human and KDE device readable data, related information on removable data storage media or available from external databases, and existing communications systems for speeding and improving: 1) personal or business automation, efficiency and productivity, goal attainment; 2) improving, speeding and automating the person-computer interface; 3) selection, acquisition, and tracking usage of items acquired from an existing
Abstract: A business method utilizing a system comprising one or more distributed computers, application software, off-the-shelf peripheral components including keyboard-and-mouseless data entry (KDE) devices, business processes, human and KDE device readable data, related information on removable data storage media or available from external databases, and existing communications systems for speeding and improving: 1) personal or business automation, efficiency and productivity, goal attainment; 2) improving, speeding and automating the person-computer interface; 3) selection, acquisition, and tracking usage of items acquired from an existing supply chain; 4) marketing items and retaining customers buying the products, controlling their usage, and disseminating information about the products.

456 citations


Proceedings ArticleDOI
09 Dec 2002
TL;DR: Controlled physical random functions (CPUFs) are introduced which are PUFs that can only be accessed via an algorithm that is physically bound to the PUF in an inseparable way.
Abstract: A physical random function (PUF) is a random function that can only be evaluated with the help of a complex physical system. We introduce controlled physical random functions (CPUFs) which are PUFs that can only be accessed via an algorithm that is physically bound to the PUF in an inseparable way. CPUFs can be used to establish a shared secret between a physical device and a remote user. We present protocols that make this possible in a secure and flexible way, even in the case of multiple mutually mistrusting parties. Once established, the shared secret can be used to enable a wide range of applications. We describe certified execution, where a certificate is produced that proves that a specific computation was carried out on a specific processor. Certified execution has many benefits, including protection against malicious nodes in distributed computation networks. We also briefly discuss a software licensing application.
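A rough sketch of the challenge-response flow: the physical function is modeled here as a keyed MAC (a stand-in only; a real PUF derives its response from physical manufacturing variation, not a stored key), and the controlling algorithm hashes the raw response with a user nonce so the bare PUF output is never exposed. All function and variable names are illustrative.

```python
import hmac, hashlib

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for the physical random function: keyed MAC over the
    challenge. In a real CPUF the response comes from the physics of
    the device, which is what makes it hard to clone."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def derive_shared_secret(response: bytes, nonce: bytes) -> bytes:
    """The controlling algorithm never reveals the raw response; it
    hashes it with a user-supplied nonce to form the shared secret."""
    return hashlib.sha256(response + nonce).digest()

device_secret = b"physical-variation-stand-in"
challenge, nonce = b"challenge-1", b"user-nonce"
# Device side, and a user who pre-recorded the response to this
# challenge, derive the same secret:
k_device = derive_shared_secret(puf_response(device_secret, challenge), nonce)
k_user = derive_shared_secret(puf_response(device_secret, challenge), nonce)
```

With such a shared secret in hand, the device can MAC its computation results, which is the basis of the certified execution described in the abstract.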

430 citations


Proceedings ArticleDOI
21 May 2002
TL;DR: This paper presents the gSOAP stub and skeleton compiler, which provides a unique SOAP-to-C/C++ language binding for deploying C/C++ applications in SOAP Web Services, clients, and peer-to-peer computing networks.
Abstract: This paper presents the gSOAP stub and skeleton compiler. The compiler provides a unique SOAP-to-C/C++ language binding for deploying C/C++ applications in SOAP Web Services, clients, and peer-to-peer computing networks. gSOAP enables the integration of (legacy) C/C++/Fortran codes, embedded systems, and real-time software in Web Services, clients, and peers that share computational resources and information with other SOAP-enabled applications, possibly across different platforms, language environments, and disparate organizations located behind firewalls. Results on interoperability, legacy code integration, scalability, and performance are given.

315 citations


Proceedings ArticleDOI
24 Jul 2002
TL;DR: A high-performance SOAP implementation and a schema-specific parser based on the results of this investigation are presented and a multiprotocol approach that uses SOAP to negotiate faster binary protocols between messaging participants is recommended.
Abstract: The growing synergy between Web Services and Grid-based technologies will potentially enable profound, dynamic interactions between scientific applications dispersed in geographic, institutional, and conceptual space. Such deep interoperability requires the simplicity, robustness, and extensibility for which SOAP was conceived, thus making it a natural lingua franca. Concomitant with these advantages, however, is a degree of inefficiency that may limit the applicability of SOAP to some situations. We investigate the limitations of SOAP for high-performance scientific computing. We analyze the processing of SOAP messages and identify the issues at each stage. We present a high-performance SOAP implementation and a schema-specific parser based on the results of our investigation. After our SOAP optimizations are implemented, the most significant bottleneck is ASCII/double conversion. Instead of handling this using extensions to SOAP, we recommend a multiprotocol approach that uses SOAP to negotiate faster binary protocols between messaging participants.
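The ASCII/double bottleneck is easy to illustrate: SOAP carries each double as decimal text that must be printed by the sender and re-parsed by the receiver, while a negotiated binary protocol simply copies the 8-byte IEEE 754 representation. A small comparison, with invented payload values:

```python
import struct

values = [3.141592653589793, 2.718281828459045, 1.4142135623730951]

# SOAP-style: doubles become decimal text that must round-trip exactly
# and be re-parsed on receipt.
ascii_payload = " ".join(repr(v) for v in values)
decoded_text = [float(s) for s in ascii_payload.split()]

# A negotiated binary protocol just copies the 8-byte IEEE 754 form:
binary_payload = struct.pack(f"<{len(values)}d", *values)
decoded_binary = list(struct.unpack(f"<{len(values)}d", binary_payload))
```

The binary form is both smaller (8 bytes per double versus ~17 characters) and free of the conversion cost the paper identifies as the dominant bottleneck.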

309 citations


Proceedings ArticleDOI
16 Nov 2002
TL;DR: The automated tuning system is able to tune the application parameters to within a few percent of the best value after evaluating only 11 out of over 1,700 possible configurations.
Abstract: In this paper, we present the Active Harmony automated runtime tuning system. We describe the interface used by programs to make applications tunable. We present the Library Specification Layer which helps program library developers expose multiple variations of the same API using different algorithms. The Library Specification Language helps to select the most appropriate program library to tune the overall performance. We also present the optimization algorithm used to adjust parameters in the application and the libraries. Finally, we present results that show how the system is able to tune several real applications. The automated tuning system is able to tune the application parameters to within a few percent of the best value after evaluating only 11 out of over 1,700 possible configurations.
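The tuning loop can be sketched as a search that evaluates only a small fraction of the configuration space. The coordinate-descent toy below illustrates the idea of tuning within a fixed evaluation budget; it is not Active Harmony's actual optimization algorithm, and the parameter names are invented.

```python
def tune(objective, space, max_evals=12):
    """Toy coordinate-descent tuner in the spirit of Active Harmony:
    probe one parameter at a time, keep improvements, and stop after a
    small budget of objective evaluations."""
    config = {k: vals[0] for k, vals in space.items()}
    best = objective(config)
    evals = 1
    for param, vals in space.items():
        for v in vals[1:]:
            if evals >= max_evals:
                return config, best, evals
            trial = dict(config, **{param: v})
            score = objective(trial)
            evals += 1
            if score < best:
                config, best = trial, score
    return config, best, evals

# 3 x 3 = 9 possible configurations, but only 5 are evaluated:
space = {"tile": [8, 16, 32], "unroll": [1, 2, 4]}
runtime = lambda c: abs(c["tile"] - 16) + abs(c["unroll"] - 4)
best_cfg, best_time, n_evals = tune(runtime, space)
```

The point mirrors the abstract's result: a good configuration is found after evaluating far fewer points than the full cross-product of parameter values.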

293 citations


Proceedings ArticleDOI
24 Jul 2002
TL;DR: This work presents a general-purpose resource selection framework that extends the Condor matchmaking framework to support both single-resource and multiple-resource selection, and presents results obtained when this framework is applied in the context of a computational astrophysics application, Cactus.
Abstract: While distributed, heterogeneous collections of computers ("Grids") can in principle be used as a computing platform, in practice the problems of first discovering and then organizing resources to meet application requirements are difficult. We present a general-purpose resource selection framework that addresses these problems by defining a resource selection service for locating Grid resources that match application requirements. At the heart of this framework is a simple, but powerful, declarative language based on a technique called set matching, which extends the Condor matchmaking framework to support both single-resource and multiple-resource selection. This framework also provides an open interface for loading application-specific mapping modules to personalize the resource selector. We present results obtained when this framework is applied in the context of a computational astrophysics application, Cactus. These results demonstrate the effectiveness of our technique.
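The flavor of set matching can be conveyed with a much simplified version that reduces the declarative language to dictionary comparisons: select a set of resources in which every member meets the application's requirements. The attribute names below are invented for illustration.

```python
def set_match(requirements, resources, count):
    """Toy version of set matching: pick `count` resources, each of
    which satisfies the per-node requirements. This collapses the
    Condor/set-matching language to plain dict comparisons."""
    candidates = [r for r in resources
                  if r["memory_mb"] >= requirements["memory_mb"]
                  and r["os"] == requirements["os"]]
    # Prefer the fastest nodes among those that qualify:
    candidates.sort(key=lambda r: r["mips"], reverse=True)
    return candidates[:count] if len(candidates) >= count else None

resources = [
    {"name": "n1", "os": "linux", "memory_mb": 512, "mips": 400},
    {"name": "n2", "os": "linux", "memory_mb": 256, "mips": 900},
    {"name": "n3", "os": "linux", "memory_mb": 1024, "mips": 700},
]
match = set_match({"os": "linux", "memory_mb": 512}, resources, 2)
```

The real framework additionally supports application-specific mapping modules, which would replace the hard-coded sort above with a pluggable ranking function.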

275 citations


Proceedings ArticleDOI
20 Jun 2002
TL;DR: The location stack is presented, a layered software engineering model for location in ubiquitous computing that encourages system designers to think of their applications in this way, to drive location-based computing toward a common vocabulary and standard infrastructure.
Abstract: Based on five design principles extracted from a survey of location systems, we present the location stack, a layered software engineering model for location in ubiquitous computing. Our model is similar in spirit to the seven-layer Open System Interconnect (OSI) model for computer networks. We map two existing ubiquitous computing systems to the model to illustrate the leverage the location stack provides. By encouraging system designers to think of their applications in this way, we hope to drive location-based computing toward a common vocabulary and standard infrastructure, permitting members of the ubiquitous computing community to easily evaluate and build on each other's work.

254 citations


Patent
15 May 2002
TL;DR: The Operating System Guard as discussed by the authors is a system for creating an application software environment without changing an operating system of a client computer, the system comprising an operating-system abstraction and protection layer, wherein the abstraction layer is interposed between a running software application and said operating system, whereby a virtual environment in which an application may run is provided and application level interactions are substantially removed.
Abstract: The present invention provides a system for creating an application software environment without changing an operating system of a client computer, the system comprising an operating system abstraction and protection layer, wherein said abstraction and protection layer is interposed between a running software application and said operating system, whereby a virtual environment in which an application may run is provided and application level interactions are substantially removed. Preferably, any changes directly to the operating system are selectively made within the context of the running application and the abstraction and protection layer dynamically changes the virtual environment according to administrative settings. Additionally, in certain embodiments, the system continually monitors the use of shared system resources and acts as a service to apply and remove changes to system components. The present invention thus defines an “Operating System Guard.” These components cover the protection semantics required by .DLLs and other shared library code as well as system device drivers, fonts, registries and other configuration items, files, and environment variables.

250 citations


Proceedings ArticleDOI
03 Oct 2002
TL;DR: The paper exploits an object-oriented model of a WA as a test model, proposes a definition of the unit level for testing the WA, and develops an integrated platform of tools comprising a Web application analyzer, a repository, a test case generator and a test case executor.
Abstract: The rapid diffusion of Internet and open standard technologies is producing a significant growth of the demand for Web sites and Web applications with increasingly strict requirements of usability, reliability, interoperability and security. While several methodological and technological proposals for developing Web applications are coming from both industry and academia, there is a general lack of methods and tools to carry out the key processes that significantly impact the quality of a Web application (WA), such as validation & verification (V&V) and quality assurance. Some open issues in the field of Web application testing are addressed in this paper. The paper exploits an object-oriented model of a WA as a test model, and proposes a definition of the unit level for testing the WA. Based on this model, a method to test the single units of a WA and for integration testing is proposed. Moreover, in order to experiment with the proposed technique and strategy, an integrated platform of tools comprising a Web application analyzer, a repository, a test case generator and a test case executor has been developed and is presented in the paper. A case study, carried out with the aim of assessing the effectiveness of the proposed method and tools, produced interesting and encouraging results.

Proceedings ArticleDOI
02 Feb 2002
TL;DR: A complete system power simulator, called SoftWatt, is presented that models the CPU, memory hierarchy, and a low-power disk subsystem and quantifies the power behavior of both the application and operating system.
Abstract: Power dissipation has become one of the most critical factors for the continued development of both high-end and low-end computer systems. We present a complete system power simulator, called SoftWatt, that models the CPU, memory hierarchy, and a low-power disk subsystem and quantifies the power behavior of both the application and operating system. This tool, built on top of the SimOS infrastructure, uses validated analytical energy models to identify the power hotspots in the system components, capture relative contributions of the user and kernel code to the system power profile, identify the power-hungry operating system services and characterize the variance in kernel power profile with respect to workload. Our results using Spec JVM98 benchmark suite emphasize the importance of complete system simulation to understand the power impact of architecture and operating system on application execution.

Journal ArticleDOI
07 Nov 2002
TL;DR: The main focus of the paper is on outlining the methodologies required to realize the potential of reconfigurable architectures for vision applications and the development of fundamental configurable computing models that abstract the underlying hardware for high-level application mapping.
Abstract: Reconfigurable computing is emerging as the new paradigm for satisfying the simultaneous demand for application performance and flexibility. The ability to customize the architecture to match the computation and the data flow of the application has demonstrated significant performance benefits compared to general purpose architectures. Computer vision applications are one class of applications that have significant heterogeneity in their computation and communication structures. At the low level, vision algorithms have regular repetitive computations operating on large sets of image data with predictable data dependencies. At the higher level, the computations have irregular dependencies. Computer vision application characteristics have significant overlap with the advantages of reconfigurable architectures. The main focus of the paper is on outlining the methodologies required to realize the potential of reconfigurable architectures for vision applications. After giving a broad introduction to reconfigurable computing, the advantages of utilizing reconfigurable architectures for vision applications are outlined and illustrated using example computations. The paper discusses the development of fundamental configurable computing models that abstract the underlying hardware for high-level application mapping. The Hybrid System Architecture Model and algorithms utilizing the model are illustrated to demonstrate a formal framework. The paper also outlines ongoing research and provides a comprehensive list of references for further reading.

Journal ArticleDOI
15 Apr 2002
TL;DR: This paper examines the explicit communication characteristics of several sophisticated scientific applications, which, by themselves, constitute a representative suite of publicly available benchmarks for large cluster architectures by focusing on the Message Passing Interface (MPI) and by using hardware counters on the microprocessor.
Abstract: This paper examines the explicit communication characteristics of several sophisticated scientific applications, which, by themselves, constitute a representative suite of publicly available benchmarks for large cluster architectures. By focusing on the Message Passing Interface (MPI) and by using hardware counters on the microprocessor, we observe each application's inherent behavioral characteristics: point-to-point and collective communication, and floating point operations. Furthermore, we explore the sensitivities of these characteristics to both problem size and number of processors. Our analysis reveals several striking similarities across our diverse set of applications including the use of collective operations, especially those collectives with very small data payloads. We also highlight a trend of novel applications parting with regimented, static communication patterns in favor of dynamically evolving patterns, as evidenced by our experiments on applications that use implicit linear solvers and adaptive mesh refinement. Overall, our study contributes a better understanding of the requirements of current and emerging paradigms of scientific computing in terms of their computation and communication demands.

Journal ArticleDOI
01 May 2002
TL;DR: This paper introduces the idea of using a User-Level Memory Thread (ULMT) for correlation prefetching, and shows that the scheme works well in combination with a conventional processor-side sequential prefetcher, in which case the average speedup increases to 1.46.
Abstract: This paper introduces the idea of using a User-Level Memory Thread (ULMT) for correlation prefetching. In this approach, a user thread runs on a general-purpose processor in main memory, either in the memory controller chip or in a DRAM chip. The thread performs correlation prefetching in software, sending the prefetched data into the L2 cache of the main processor. This approach requires minimal hardware beyond the memory processor: the correlation table is a software data structure that resides in main memory, while the main processor only needs a few modifications to its L2 cache so that it can accept incoming prefetches. In addition, the approach has wide usability, as it can effectively prefetch even for irregular applications. Finally, it is very flexible, as the prefetching algorithm can be customized by the user on an application basis. Our simulation results show that, through a new design of the correlation table and prefetching algorithm, our scheme delivers good results. Specifically, nine mostly-irregular applications show an average speedup of 1.32. Furthermore, our scheme works well in combination with a conventional processor-side sequential prefetcher, in which case the average speedup increases to 1.46. Finally, by exploiting the customization of the prefetching algorithm, we increase the average speedup to 1.53.
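The core data structure is a correlation table mapping a miss address to the addresses that tended to miss next; on a repeat miss, those recorded successors are prefetched. A minimal software sketch follows; the table layout and parameters are simplified relative to the paper's design.

```python
from collections import defaultdict, deque

class CorrelationPrefetcher:
    """Software correlation table in the spirit of a ULMT sketch: for
    each miss address, remember which addresses missed next, and
    prefetch those successors when the address misses again."""
    def __init__(self, successors_per_entry=2):
        self.table = defaultdict(lambda: deque(maxlen=successors_per_entry))
        self.last_miss = None

    def on_miss(self, addr):
        # Record that `addr` followed the previous miss.
        if self.last_miss is not None and addr not in self.table[self.last_miss]:
            self.table[self.last_miss].appendleft(addr)
        self.last_miss = addr
        return list(self.table[addr])  # addresses to prefetch now

p = CorrelationPrefetcher()
for a in [0x100, 0x200, 0x300, 0x100]:  # train on a recurring miss stream
    p.on_miss(a)
prefetches = p.on_miss(0x200)  # repeat miss: successors of 0x200 are known
```

Because the table is an ordinary in-memory data structure, it can be resized or its replacement policy changed per application, which is the flexibility argument the abstract makes.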

Proceedings ArticleDOI
04 Mar 2002
TL;DR: This work advocates a network on silicon as a hardware architecture to implement communication between IP cores in future technologies, and as a software model in the form of a protocol stack to structure the programming of NOSs.
Abstract: We advocate a network on silicon (NOS) as a hardware architecture to implement communication between IP cores in future technologies, and as a software model in the form of a protocol stack to structure the programming of NOSs. We claim guaranteed services are essential. In the ETHEREAL NOS they pervade the NOS as a requirement for hardware design, and as foundation for software programming.

Proceedings ArticleDOI
02 Jul 2002
TL;DR: The design and implementation of ControlWare is described, a middleware QoS-control architecture based on control theory, motivated by the needs of performance-assured Internet services that offers a new type of guarantees the authors call convergence guarantees that lie between hard and probabilistic guarantees.
Abstract: Attainment of software performance assurances in open, largely unpredictable environments has recently become an important focus for real-time research. Unlike closed embedded systems, many contemporary distributed real-time applications operate in environments where offered load and available resources suffer considerable random fluctuations, thereby complicating the performance assurance problem. Feedback control theory has recently been identified as a promising analytic foundation for controlling performance of such unpredictable, poorly modeled software systems, the same way other engineering disciplines have used this theory for physical process control. In this paper we describe the design and implementation of ControlWare, a middleware QoS-control architecture based on control theory, motivated by the needs of performance-assured Internet services. It offers a new type of guarantees we call convergence guarantees that lie between hard and probabilistic guarantees. The efficacy of the architecture in achieving its QoS goals under realistic load conditions is demonstrated in the context of web server and proxy QoS management.
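The control-theoretic core can be sketched as a PI loop that drives a measured output toward a setpoint by adjusting an actuator. The gains and the linear "plant" below are invented purely for illustration; real QoS actuators and load models are far less tidy, which is precisely why the paper argues for convergence rather than hard guarantees.

```python
class PIController:
    """Minimal proportional-integral loop of the kind control-theoretic
    QoS middleware builds on (gains here are illustrative)."""
    def __init__(self, setpoint, kp, ki):
        self.setpoint, self.kp, self.ki = setpoint, kp, ki
        self.integral = 0.0

    def step(self, measured):
        error = self.setpoint - measured
        self.integral += error
        return self.kp * error + self.ki * self.integral  # actuator delta

# Invented plant: served request rate responds linearly to the
# allocated resource share.
ctrl = PIController(setpoint=100.0, kp=0.01, ki=0.002)
share, rate = 0.0, 0.0
for _ in range(200):
    share += ctrl.step(rate)
    rate = 40.0 * share
```

After the transient dies out, the loop settles at the setpoint regardless of the plant gain, which is the "convergence guarantee" intuition: the output reaches the target eventually, without a hard bound on when.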

Proceedings ArticleDOI
07 Aug 2002
TL;DR: A new QoS-control paradigm based on adaptive control theory is introduced, which eliminates profiling and configuration costs ofQoS-aware software, by completely automating the process in a way that does not require user intervention.
Abstract: Software mechanisms that enforce QoS guarantees often require knowledge of platform capacity and resource demand. This requirement calls for performance measurements and profiling upon platform upgrades, failures, or new installations. The cost of performing such measurements is a significant hurdle to the widespread deployment of open QoS-aware software components. In this paper, we introduce a new QoS-control paradigm based on adaptive control theory. The hallmark of this paradigm is that it eliminates profiling and configuration costs of QoS-aware software, by completely automating the process in a way that does not require user intervention. As a case study, we describe, implement and evaluate the control architecture in a proxy cache to provide proportional differentiation on content hit rate. Adaptive control theory is leveraged to manage cache resources in a way that adjusts the quality spacing between classes, independently of the class loads, which cannot be achieved by other cache resource management schemes, such as biased replacement policies, LRV or greedy-dual-size.

Proceedings ArticleDOI
19 May 2002
TL;DR: An empirical study of the validity of multivariate models for predicting software fault-proneness across different applications shows that suitably selected multivariate models can predict fault-proneness of modules of different software packages.
Abstract: Planning and allocating resources for testing is difficult and it is usually done on an empirical basis, often leading to unsatisfactory results. The possibility of early estimation of the potential faultiness of software could be of great help for planning and executing testing activities. Most research concentrates on the study of different techniques for computing multivariate models and evaluating their statistical validity, but we still lack experimental data about the validity of such models across different software applications. The paper reports on an empirical study of the validity of multivariate models for predicting software fault-proneness across different applications. It shows that suitably selected multivariate models can predict fault-proneness of modules of different software packages.
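A multivariate fault-proneness model of this kind is typically a logistic regression over module metrics: metrics in, probability of containing a fault out. The sketch below uses invented coefficients rather than fitted ones, purely to show the shape of such a model.

```python
import math

def fault_proneness(metrics, coeffs, intercept):
    """Multivariate logistic model: a weighted sum of module metrics
    pushed through the logistic function gives a fault probability.
    Coefficients below are illustrative, not fitted to real data."""
    z = intercept + sum(coeffs[m] * v for m, v in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

coeffs = {"loc": 0.004, "cyclomatic": 0.15, "fan_out": 0.1}
p_small = fault_proneness({"loc": 100, "cyclomatic": 3, "fan_out": 2},
                          coeffs, intercept=-3.0)
p_large = fault_proneness({"loc": 2000, "cyclomatic": 40, "fan_out": 12},
                          coeffs, intercept=-3.0)
```

The paper's question is whether coefficients fitted on one application remain predictive on another; the model form itself is the same either way.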

Proceedings ArticleDOI
05 Sep 2002
TL;DR: The concept of passive distributed indexing, a general-purpose distributed search service for mobile file sharing applications, which is based on peer-to-peer technology, is presented and it is shown that due to the flexible design PDI can be employed for several kinds of applications.
Abstract: In this paper, we present the concept of passive distributed indexing (PDI), a general-purpose distributed search service for mobile file sharing applications, which is based on peer-to-peer technology. The service enables resource-effective searching for files distributed across mobile devices based on simple queries. The building blocks of PDI are local broadcast transmission of query and response messages, together with caching of query results at every device participating in PDI. Based on these building blocks, the need for flooding the entire network with query messages can be eliminated for most applications. In extensive simulation studies, we demonstrate the performance of PDI. Because the requirements of a typical mobile file sharing application are not known, or may not even exist at all, we study the performance of PDI for different system environments and application requirements. We show that due to its flexible design, PDI can be employed for several kinds of applications.
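The caching building block can be sketched as follows: every node stores the query results it overhears on the local broadcast, so a later identical query can be answered from the cache without reaching the original owner. The message format and class structure here are invented.

```python
class PdiNode:
    """Sketch of passive distributed indexing: nodes cache overheard
    query results so later queries need not flood the network."""
    def __init__(self, name, local_files):
        self.name = name
        self.local_files = set(local_files)
        self.result_cache = {}  # keyword -> set of (node, filename)

    def answer(self, keyword):
        """Combine local matches with previously overheard results."""
        hits = {(self.name, f) for f in self.local_files if keyword in f}
        hits |= self.result_cache.get(keyword, set())
        return hits

    def overhear(self, keyword, results):
        """Passively index a response seen on the local broadcast."""
        self.result_cache.setdefault(keyword, set()).update(results)

a = PdiNode("A", ["trip-photo.jpg"])
b = PdiNode("B", ["notes.txt"])
# A answers a broadcast query for "photo"; B caches what it overhears:
results = a.answer("photo")
b.overhear("photo", results)
# Later, B satisfies the same query from its cache, without reaching A:
cached = b.answer("photo")
```

In a mobile setting this matters because the original owner may have moved out of radio range by the time the query recurs.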

Proceedings ArticleDOI
26 Aug 2002
TL;DR: An approach based on similarity metrics, to detect duplicated pages in web sites and applications, implemented with HTML language and ASP technology is proposed.
Abstract: A relevant consequence of the expansion of the web and e-commerce is the growth of the demand for new web sites and web applications. As a result, web sites and applications are usually developed without a formalized process, and web pages are directly coded in an incremental way, where new pages are obtained by duplicating existing ones. Duplicated web pages, having the same structure and differing only in the data they include, can be considered clones. The identification of clones may reduce the effort devoted to testing, maintaining and evolving web sites and applications. Moreover, clone detection among different web sites aims to detect cases of possible plagiarism. In this paper we propose an approach based on similarity metrics to detect duplicated pages in web sites and applications implemented with the HTML language and ASP technology. The proposed approach has been assessed by analyzing several web sites and web applications. The obtained results are reported in the paper with respect to some case studies.
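A similarity metric for structural clones can be as simple as comparing the tag structure of two pages while ignoring the data they carry. Jaccard similarity over tag sets, shown below, is a deliberate simplification of the metrics actually used in the paper.

```python
def page_similarity(tags_a, tags_b):
    """Structural similarity of two pages as Jaccard similarity over
    their HTML tag sets: 1.0 for identical structure, 0.0 for none in
    common. A simplification of the paper's metrics."""
    set_a, set_b = set(tags_a), set(tags_b)
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

page1 = ["html", "head", "title", "body", "table", "tr", "td"]
page2 = ["html", "head", "title", "body", "table", "tr", "td"]  # same structure
page3 = ["html", "head", "body", "form", "input"]

clone_score = page_similarity(page1, page2)  # candidate clone
other_score = page_similarity(page1, page3)  # structurally different
```

Pages whose score exceeds a chosen threshold are flagged as clone candidates for a maintainer to review.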

Proceedings ArticleDOI
15 Apr 2002
TL;DR: The goal of this framework is to provide good resource allocation for Grid applications and to support adaptive reallocation if performance degrades because of changes in the availability of Grid resources.
Abstract: This paper describes the program execution framework being developed by the Grid Application Development Software (GrADS) Project. The goal of this framework is to provide good resource allocation for Grid applications and to support adaptive reallocation if performance degrades because of changes in the availability of Grid resources. At the heart of this strategy is the notion of a configurable object program, which contains, in addition to application code, strategies for mapping the application to different collections of resources and a resource selection model that provides an estimate of the performance of the application on a specific collection of Grid resources. This model must be accurate enough to distinguish collections of resources that will deliver good performance from those that will not. The GrADS execution framework also provides a contract monitoring mechanism for interrupting and remapping an application execution when performance falls below acceptable levels.
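Contract monitoring reduces to comparing observed progress against the performance model's estimate and signaling a remap when it falls below a tolerance. A minimal sketch with invented thresholds and units, not the GrADS implementation:

```python
class ContractMonitor:
    """Sketch of GrADS-style contract monitoring: flag a violation
    when observed progress drops below a fraction of the performance
    model's estimate (the tolerance value is invented)."""
    def __init__(self, expected_rate, tolerance=0.7):
        self.expected_rate = expected_rate  # e.g. iterations/second
        self.tolerance = tolerance

    def check(self, work_done, elapsed):
        observed = work_done / elapsed
        if observed >= self.tolerance * self.expected_rate:
            return "ok"
        return "remap"  # interrupt and remap to better resources

monitor = ContractMonitor(expected_rate=100.0)
healthy = monitor.check(work_done=450, elapsed=5.0)   # 90/s: within contract
degraded = monitor.check(work_done=300, elapsed=5.0)  # 60/s: violation
```

On a "remap" signal, the configurable object program's mapping strategies would be re-run against the currently available resources.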

Patent
29 Mar 2002
TL;DR: In this paper, the authors present a video reproduction apparatus that reproduces externally supplied package media, which contains video content storing video data and playback control data controlling reproduction of the video data in a specified data format and extensible application software using the video content.
Abstract: A video reproduction apparatus according to the present invention reproduces externally supplied package media. The package media contains video content storing video data and playback control data controlling reproduction of the video data in a specified data format, and extensible application software using the video content. The video reproduction apparatus includes as software pre-stored and executed in internal memory an operating system chosen from operating systems of plural types, middleware for absorbing differences in function according to the type of operating system, and a player application that runs on the middleware level for reproducing the video content. The middleware has a class library including tools used by the player application to play back the package media or to run the extensible application software. The player application consistently reproduces the video content of the package media according to the specified format by way of the tools included in the middleware class libraries. The extensible application software is run through the tools included in the class libraries of the middleware using video content contained in the same package media.

Patent
12 Aug 2002
TL;DR: In this paper, a method and apparatus for configuring a network switch is described, which includes an application software that may be executing on an application subsystem coupled to a system control subsystem.
Abstract: A method and apparatus are provided for configuring a network switch. Method and apparatus may include, in some embodiments, an application software that may be executing on an application subsystem coupled to a system control subsystem. The application software may, in some embodiments, communicate configuration or other information to the system control subsystem, which may then be utilized to configure the network switch. Additionally, data may be transferred to the application software from the system control subsystem or other subsystem relating to configuration information, fault information, or other data.

Patent
10 Oct 2002
TL;DR: In this paper, the authors present a datacast distribution system which allows for the distribution of movies, music, games, application software, and the like using a new or existing terrestrial digital video broadcast (DVB-T) network.
Abstract: According to the present invention there is provided a datacast distribution system which allows for the distribution of movies, music, games, application software, and the like using a new or existing terrestrial digital video broadcast (DVB-T) network.

Proceedings ArticleDOI
19 May 2002
TL;DR: This paper presents an approach to recover the architecture of dynamic web applications, in order to make maintenance more manageable, and is flexible and retargetable to the various technologies that are used in developing web applications.
Abstract: Web applications are the legacy software of the future. Developed under tight schedules, with high employee turn over, and in a rapidly evolving environment, these systems are often poorly structured and poorly documented. Maintaining such systems is problematic. This paper presents an approach to recover the architecture of such systems, in order to make maintenance more manageable. Our lightweight approach is flexible and retargetable to the various technologies that are used in developing web applications. The approach extracts the structure of dynamic web applications and shows the interaction between their various components such as databases, distributed objects, and web pages. The recovery process uses a set of specialized extractors to analyze the source code and binaries of web applications. The extracted data is manipulated to reduce the complexity of the architectural diagrams. Developers can use the extracted architecture to gain a better understanding of web applications and to assist in their maintenance.
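The paper's specialized extractors scan source files and emit facts about component interactions. As a rough illustration only, a lightweight extractor for a PHP-style application might be a set of regular-expression scans; the relation names and patterns below are assumptions for the sketch, not the paper's actual extractor rules.

```python
import re

# Hypothetical extraction patterns for a PHP-style web application.
PATTERNS = {
    "database": re.compile(r"mysql_query\s*\(\s*[\"'](.+?)[\"']"),
    "page_link": re.compile(r"href\s*=\s*[\"']([^\"']+\.php)[\"']"),
    "include": re.compile(r"\binclude\s*\(?\s*[\"']([^\"']+)[\"']"),
}

def extract_facts(source: str, page: str):
    """Return (page, relation, target) facts found in one source file."""
    facts = []
    for relation, pattern in PATTERNS.items():
        for match in pattern.finditer(source):
            facts.append((page, relation, match.group(1)))
    return facts

src = '''
<a href="cart.php">Cart</a>
<?php include("db.php");
$r = mysql_query("SELECT * FROM items"); ?>
'''
print(extract_facts(src, "index.php"))
```

Facts of this shape can then be aggregated across files and filtered to reduce the complexity of the resulting architectural diagram, as the abstract describes.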

Proceedings ArticleDOI
26 Aug 2002
TL;DR: The concept of situation is formalized, and an approach to developing situation-aware application software is presented that utilizes reconfigurable context-sensitive middleware, and is illustrated by an example on Smart Classroom.
Abstract: Ubiquitous computing represents the concept of computing everywhere, making computing and communication essentially transparent to the users. Applications in this type of environments are context sensitive. They use various contexts to adaptively communicate with each other across multiple network environments, such as mobile ad hoc networks, Internet, and mobile phone networks. The property of context-sensitivity often becomes inadequate in these applications, where combinations of multiple contexts and users' actions need to be analyzed over a period of time. Situation-awareness in application software is considered as a desirable property to overcome this limitation. In addition to being context-sensitive, situation-aware applications can respond to both current and historical relationships of specific contexts and device-actions. Currently, no well-defined concept of situation and no general method exist to facilitate the development of situation-aware application software for ubiquitous computing environments. In this paper, the concept of situation is formalized, and an approach to developing situation-aware application software is presented. The approach utilizes our reconfigurable context-sensitive middleware, and is illustrated by an example on Smart Classroom.
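The abstract's key idea is that a situation, unlike a single context reading, is evaluated over a history of contexts and actions. A minimal sketch of that formalization, with class names, fields, and the "lecture" predicate invented for illustration (none come from the paper), might look like:

```python
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class ContextReading:
    """One time-stamped observation of contexts and an optional action."""
    timestamp: float
    contexts: dict           # e.g. {"teacher_present": True}
    action: Optional[str]    # device/user action observed, if any

class Situation:
    """A situation: a predicate over a time window of context history."""
    def __init__(self, predicate, window_seconds):
        self.predicate = predicate
        self.window = window_seconds

    def holds(self, history, now=None):
        now = now if now is not None else time.time()
        recent = [r for r in history if now - r.timestamp <= self.window]
        return self.predicate(recent)

# Hypothetical "lecture in progress" situation for a Smart Classroom:
# the teacher was present in every reading over the last 60 seconds.
lecture = Situation(
    lambda rs: bool(rs) and all(r.contexts.get("teacher_present") for r in rs),
    window_seconds=60,
)

history = [ContextReading(t, {"teacher_present": True}, None)
           for t in (0.0, 20.0, 40.0)]
print(lecture.holds(history, now=50.0))   # situation currently holds
```

An application built on such middleware would subscribe to situations rather than raw contexts, reacting when `holds` becomes true.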

Proceedings ArticleDOI
09 Dec 2002
TL;DR: This paper describes the organization of Strata and demonstrates its extension by building two SVE systems, system call interposition and stack-smashing prevention; Strata's portability ensures that SVE applications implemented with it are available to a wide variety of host systems.
Abstract: Safe virtual execution (SVE) allows a host computer system to reduce the risks associated with running untrusted programs. SVE prevents untrusted programs from directly accessing system resources, thereby giving the host the ability to control how individual resources may be used. SVE is used in a variety of safety-conscious software systems, including the Java Virtual Machine (JVM), software fault isolation (SFI), system call interposition layers, and execution monitors. While SVE is the conceptual foundation for these systems, each uses a different implementation technology. The lack of a unifying framework for building SVE systems results in a variety of problems: many useful SVE systems are not portable and therefore are usable only on a limited number of platforms; code reuse among different SVE systems is often difficult or impossible; and building SVE systems from scratch can be both time consuming and error prone. To address these concerns, we have developed a portable, extensible framework for constructing SVE systems. Our framework, called Strata, is based on software dynamic translation (SDT), a technique for modifying binary programs as they execute. Strata is designed to be ported easily to new platforms and to date has been targeted to SPARC/Solaris, x86/Linux, and MIPS/IRIX. This portability ensures that SVE applications implemented in Strata are available to a wide variety of host systems. Strata also affords the opportunity for code reuse among different SVE applications by establishing a common implementation framework. Strata implements a basic safe virtual execution engine using SDT. The base functionality supplied by this engine is easily extended to implement specific SVE systems. In this paper we describe the organization of Strata and demonstrate its extension by building two SVE systems: system call interposition and stack-smashing prevention. To illustrate the use of the system call interposition extensions, the paper presents implementations of several useful security policies.
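The core SDT loop the abstract describes — translate code on first execution, cache the translation, and insert policy checks such as system call interposition — can be illustrated with a toy interpreter. The instruction format, the allow-list, and all names below are assumptions for the sketch; Strata itself operates on real machine code, not this toy form.

```python
# A toy "untrusted program" as a list of instructions.
UNTRUSTED = [
    ("add", 2, 3),
    ("syscall", "open", "/etc/passwd"),
    ("add", 1, 1),
]

ALLOWED_SYSCALLS = {"read"}   # hypothetical interposition policy
cache = {}                    # translated-fragment cache, keyed by index

def translate(instr):
    """Translate one instruction, inserting a policy check on syscalls."""
    op = instr[0]
    if op == "add":
        return lambda: instr[1] + instr[2]
    if op == "syscall":
        def checked():
            # Interposition point: consult the policy before the call.
            if instr[1] not in ALLOWED_SYSCALLS:
                return f"denied:{instr[1]}"
            return f"ok:{instr[1]}"
        return checked
    raise ValueError(f"unknown op {op!r}")

def run(program):
    """Fetch, translate-on-first-use, and execute from the cache."""
    results = []
    for i, instr in enumerate(program):
        if i not in cache:
            cache[i] = translate(instr)
        results.append(cache[i]())
    return results

print(run(UNTRUSTED))  # the open() syscall is denied by the policy
```

The cache mirrors SDT's fragment cache: translation cost is paid once per fragment, after which execution proceeds from translated code that already embeds the safety checks.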

Proceedings ArticleDOI
Prashant Pradhan1, Renu Tewari1, Sambit Sahu1, Abhishek Chandra1, Prashant Shenoy1 
07 Aug 2002
TL;DR: This paper describes an observation-based approach for self-managing Web servers that can adapt to changing workloads while maintaining the QoS requirements of different classes and demonstrates the need to manage different resources in the system depending on the workload characteristics.
Abstract: The Web server architectures that provide performance isolation, service differentiation, and QoS guarantees rely on external administrators to set the right parameter values for the desired performance. Due to the complexity of handling varying workloads and bottleneck resources, configuring such parameters optimally becomes a challenge. In this paper we describe an observation-based approach for self-managing Web servers that can adapt to changing workloads while maintaining the QoS requirements of different classes. In this approach, the system state is monitored continuously and parameter values of various system resources - primarily the accept queue and the CPU - are adjusted to maintain the system-wide QoS goals. We implement our techniques using the Apache Web server and the Linux operating system. We first demonstrate the need to manage different resources in the system depending on the workload characteristics. We then experimentally demonstrate that our observation-based system can adapt to workload changes by dynamically adjusting the resource shares in order to maintain the QoS goals.
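The observe-then-adjust loop the abstract describes can be sketched as a small feedback controller that shifts resource share toward a class violating its latency target. The class names, targets, step size, and thresholds are illustrative assumptions, not the paper's actual parameters.

```python
def adapt_shares(shares, observed_latency, targets, step=0.05):
    """One adaptation step: grow the share of classes missing their
    latency target, shrink the share of classes comfortably under it."""
    new = dict(shares)
    for cls, latency in observed_latency.items():
        if latency > targets[cls]:            # QoS violated: add share
            new[cls] = min(1.0, new[cls] + step)
        elif latency < 0.8 * targets[cls]:    # comfortably met: release share
            new[cls] = max(0.0, new[cls] - step)
    total = sum(new.values())
    return {c: s / total for c, s in new.items()}  # renormalize to 1.0

shares = {"gold": 0.5, "bronze": 0.5}
observed = {"gold": 120.0, "bronze": 40.0}   # measured latency, ms
targets = {"gold": 100.0, "bronze": 200.0}   # per-class QoS targets, ms
print(adapt_shares(shares, observed, targets))
```

In a real deployment this step would run periodically inside the monitoring loop, with the returned shares applied to the CPU scheduler and accept-queue weights.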

Proceedings ArticleDOI
06 May 2002
TL;DR: This paper studies the use of a reconfigurable architecture platform for embedded control applications aimed at improving real time performance and proposes a new mapping flow and algorithms to partition hardware and software that best utilize this architecture.
Abstract: This paper studies the use of a reconfigurable architecture platform for embedded control applications aimed at improving real time performance. The hw/sw codesign methodology from POLIS is used. It starts from high-level specifications, optimizes an intermediate model of computation (Extended Finite State Machines) and derives both hardware and software, based on performance constraints. We study a particular architecture platform, which consists of a general purpose processor core, augmented with a reconfigurable function unit and data-path to improve run time performance. A new mapping flow and algorithms to partition hardware and software are proposed to generate implementations that best utilize this architecture. Encouraging preliminary results are shown for automotive electronic control examples.
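Hardware/software partitioning of the kind the abstract describes is often approached greedily: move the tasks with the best speedup-per-area ratio into the reconfigurable fabric until the area budget runs out. The task data and the heuristic below are assumptions for illustration, not the POLIS mapping algorithm.

```python
# Each task: (name, sw_time, hw_time, hw_area) - all values hypothetical.
tasks = [
    ("crank_angle", 100, 10, 40),
    ("knock_filter", 80, 20, 30),
    ("dashboard", 30, 25, 50),
]

def partition(tasks, area_budget):
    """Greedy hw/sw split by speedup-per-area ratio under an area budget."""
    ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
    hw, sw, used = [], [], 0
    for name, sw_time, hw_time, area in ranked:
        if used + area <= area_budget and sw_time > hw_time:
            hw.append(name)          # map to reconfigurable function unit
            used += area
        else:
            sw.append(name)          # keep on the processor core
    return hw, sw

print(partition(tasks, area_budget=70))
```

A real codesign flow would evaluate candidate partitions against the EFSM model and timing constraints rather than a single static ratio, but the budget-constrained greedy step conveys the shape of the problem.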