
Showing papers presented at the "International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing" in 2012


Proceedings ArticleDOI
11 Apr 2012
TL;DR: A general CPS architecture based on Service-Oriented Architecture (SOA) is proposed; the main advantage of the proposed architecture is the integration flexibility of services and components.
Abstract: With the goal of accomplishing ubiquitous intelligence in social life, Cyber-Physical Systems (CPS) are attracting growing attention from researchers and engineers. However, the complexity of computing and physical dynamics brings many challenges to the development of CPS, such as the integration of heterogeneous physical devices, system verification, and security assurance. A general or unified architecture plays an important part in the process of CPS design. In this paper, we review current and previous work on CPS architecture and introduce the main challenges and techniques of architecture development: real-time control, security assurance, and integration mechanisms. We then propose a general CPS architecture based on Service-Oriented Architecture (SOA); its main advantage is the integration flexibility of services and components. Finally, we introduce typical applications of CPS and suggest future research areas.

89 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: Through the simulation results, this paper has demonstrated that UTM may not be a feasible approach to security implementation as it may become a bottleneck for the application clouds.
Abstract: The key security challenges and solutions for the cloud are investigated in this paper with the help of literature reviews and an experimental model created in OPNET, which is simulated to produce statistics establishing the approach cloud computing service providers should take to provide optimal security and compliance. The literature recommends the concept of Security-as-a-Service using unified threat management (UTM) for ensuring secured services on the cloud. Through the simulation results, this paper demonstrates that UTM may not be a feasible approach to security implementation, as it may become a bottleneck for the application clouds. The fundamental benefits of cloud computing (resources on demand and high elasticity) may be diluted if UTMs do not scale up effectively with the traffic loads on the application clouds. Moreover, it is not feasible for application clouds to absorb the performance degradation for security and compliance, because UTM will not be a total solution for security and compliance: applications share vulnerabilities just as systems do, and these lie outside the UTM cloud's control.

43 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: A novel landmark-based QoS prediction framework is proposed, and two clustering-based prediction algorithms for Web services, named UBC and WSBC, are presented, aiming at enhancing QoS prediction accuracy via clustering techniques.
Abstract: The rising popularity of service-oriented architecture for constructing versatile distributed systems makes Web service recommendation and composition a hot research topic. Designing accurate personalized QoS prediction approaches for Web service recommendation is challenging due to the unpredictable Internet environment and the sparsity of available historical QoS information. In this paper, we propose a novel landmark-based QoS prediction framework and then present two clustering-based prediction algorithms for Web services, named UBC and WSBC, aiming at enhancing QoS prediction accuracy via clustering techniques. Hierarchical clustering is adopted based on a real-world Web service QoS dataset collected with PlanetLab, which contains response-time values of 200 distributed service users and 1,597 Web services. A comprehensive experimental comparison and analysis shows that our clustering-based approaches outperform other existing methods.
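
The abstract leaves out the algorithms themselves, so the sketch below only illustrates the general clustering-based idea on synthetic data of the same 200 x 1,597 shape: cluster users by response-time similarity, then predict a missing value from cluster peers. It is a minimal sketch, not the authors' UBC or WSBC code.

    # Illustrative only; the response-time matrix is randomly generated here.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rt = np.random.rand(200, 1597) * 2.0             # users x services, seconds
    rt[np.random.rand(*rt.shape) < 0.9] = np.nan     # sparsity: most QoS unknown

    filled = np.where(np.isnan(rt), np.nanmean(rt), rt)   # crude fill for distances
    labels = fcluster(linkage(filled, method="average"), t=10, criterion="maxclust")

    def predict(user, service):
        # mean response time of the user's cluster peers for that service
        peers = np.where(labels == labels[user])[0]
        vals = rt[peers, service]
        return np.nanmean(vals) if not np.all(np.isnan(vals)) else np.nanmean(rt)

    print(predict(0, 42))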

43 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper identifies scalability factors and discusses their impacts on the scalability of SaaS applications, and suggests some alternatives to improve SaaS scalability based on the factors identified.
Abstract: An important issue faced by Software-as-a-Service (SaaS) applications is scalability. Each SaaS application is typically shared by multiple (tens or hundreds of) organizations (tenants), and each tenant may have hundreds or thousands of users. Thus, the number of concurrent accesses is high, and handling a large number of user requests effectively is critical for SaaS applications. Various aspects of SaaS can have a significant impact on its scalability, including levels of scalability mechanisms, automated migration, tenant awareness, workload support, fault tolerance and recovery, software architecture, and database access. This paper identifies scalability factors and discusses their impacts on the scalability of SaaS applications. Existing approaches for addressing the scalability of SaaS applications are also analyzed, and some alternatives to improve SaaS scalability are suggested based on the factors identified.

38 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: A dynamic resource provisioning mechanism for over-allocating the capacity of Cloud data centers based on customer resource utilization patterns is introduced; it reduces the impact on real-time constraints while improvements in overall energy efficiency are sought.
Abstract: This paper introduces a dynamic resource provisioning mechanism for over-allocating the capacity of Cloud data centers based on customer resource utilization patterns. The proposed mechanism reduces the impact on real-time constraints while improvements in overall energy efficiency are sought. The main idea is to exploit the resource utilization patterns of each customer to smartly under-allocate resources to the requested Virtual Machines. This reduces the waste produced by frequent overestimations and increases data center availability. Consequently, it creates the opportunity to host additional Virtual Machines on the same computing infrastructure, improving its energy efficiency. In order to mitigate the negative effect on deadlines, the proposed over-allocation service implements a multilayer Neural Network to anticipate resource usage patterns based on historical data. Additionally, a compensation mechanism for adjusting the resource allocation in cases of unexpectedly higher demand is also described. The experiments contrast the proposed approach against traditional "Dynamic Resource Resizing" energy-aware mechanisms and against our previous work, which uses a low-pass filter as predictor. Results demonstrate meaningful improvements in energy efficiency while time constraints are only slightly affected.
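
As a rough illustration of the prediction step, this sketch trains a small multilayer network to forecast a VM's utilization from a sliding window of past samples; the trace, window size, network topology, and headroom margin are invented for the example, not taken from the paper.

    # Minimal sketch: forecast next-step utilization, then under-allocate.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    usage = np.random.rand(1000)                     # historical utilization (0..1)
    W = 24                                           # look-back window
    X = np.array([usage[i:i + W] for i in range(len(usage) - W)])
    y = usage[W:]                                    # next-step utilization

    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
    model.fit(X[:-100], y[:-100])                    # hold out the last 100 steps
    forecast = model.predict(X[-100:])               # anticipated demand

    # Grant less than the nominal request when the forecast allows it.
    request, headroom = 1.0, 0.1
    allocation = min(request, float(forecast.max()) + headroom)
    print(allocation)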

31 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: A generic VHDL template is developed which allows scalable parallelization and pipelining of 2D stencil code applications subject to application and hardware constraints, implemented as an efficient parameterizable buffering and parallel processing scheme on FPGAs.
Abstract: The efficient realization of self-organizing systems based on 2D stencil code applications, like our Marching Pixel algorithms, is a great challenge. They are data-intensive and computationally intensive, because a high number of iterations is often required. FPGAs are predestined for the realization of these algorithms: they are very flexible, allow scalable parallel processing, and have moderate power consumption, even in high-performance versions. FPGAs are therefore highly qualified to make these applications real-time capable. Our goal was to implement an efficient parameterizable buffering and parallel processing scheme for such operations in FPGAs, to process them as fast as possible. We developed a generic VHDL template which allows scalable parallelization and pipelining of 2D stencil code applications subject to application and hardware constraints.
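
For orientation, a software reference for one 2D stencil iteration is sketched below; the paper's contribution is mapping many such iterations onto a pipelined, parallel FPGA datapath through a generic VHDL template, which this plain sequential version does not capture.

    # One 5-point stencil step: each inner cell is updated from its neighbours.
    import numpy as np

    def stencil_step(grid):
        out = grid.copy()
        out[1:-1, 1:-1] = (grid[1:-1, 1:-1] + grid[:-2, 1:-1] + grid[2:, 1:-1]
                           + grid[1:-1, :-2] + grid[1:-1, 2:]) / 5.0
        return out

    grid = np.random.rand(64, 64)
    for _ in range(100):          # stencil codes often iterate many times
        grid = stencil_step(grid)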

31 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: This work proposes a well-defined modeling approach for the interaction based on components as basic structural elements, the contract paradigm for the design of the interaction, and graph transformations, which addresses the adaptivity of system of systems.
Abstract: The scope of this paper is collaborative, distributed safety-critical systems which build up a larger-scale system of systems (SoS). Systems participating in an SoS follow both global and individual goals, which may be contradictory, and both the global and local goals of the overall SoS may change over time. Hence, self-adaptiveness, i.e., reconfiguration of the SoS in reaction to changes in its context, is a major characteristic of these systems. The aim of this paper is to describe first steps towards a modeling formalism for SoS in a safety-critical context. The challenge is to address, on the one hand, the flexibility required to adapt the system at run-time and, on the other hand, to guarantee that the system still reacts in a safe manner. To address these challenges, we propose an approach which guarantees safe behavior while the system adapts to uncertainty, including context changes; such adaptation has to be assumed unsafe at design time. The key to success is to define the interactions between the systems, as well as their goals, as basic elements of the design. Based on our former work, we propose a well-defined modeling approach with components as basic structural elements, the contract paradigm for the design of the interaction, and graph transformations, which address the adaptivity of the system of systems. The component model is additionally enriched with explicit goals, which support so-called evaluation functions to determine the level of goal achievement.

27 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper introduces a spatiotemporal consistence language for real-time systems (STeC for short), which provides a location-triggered specification for real-time systems such as the Internet of Things (IoT) and Cyber-Physical Systems (CPS).
Abstract: The Internet of Things (IoT) and Cyber-Physical Systems (CPS) are a new trend of real-time systems in the area of information technology. This paper introduces a spatiotemporal consistence language for real-time systems (STeC for short). Consistence requires that a process perform its tasks at the required location or time; thus, the language provides a location-triggered specification for real-time systems. The interaction between real-time agents is handled through agent-to-agent communications. STeC resembles an extension of the process algebra CSP, but the execution time of actions and the status of agents are stressed. Two kinds of interrupts, time and interaction break, are considered. Following Dijkstra's guard style, a nondeterministic choice guarded by communications is introduced. After setting up the syntax, the language's operational semantics is introduced. As an example, the railroad crossing problem is specified in STeC.

23 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: A new Model Based Development (MBD) approach that skips the code generation step by directly compiling UML models and performs optimizations that code compilers are unable to perform, resulting in more compact assembly code.
Abstract: With the definition of fUML (Foundational Subset for Executable UML Models) along with its action language Alf (Action Language for fUML), UML (Unified Modeling Language) allows the production of executable models on which early verification and validation activities can be conducted. Despite this standardization effort and the wide use of UML in industry, developers still hand-tune the code generated from models to correct, enhance, or optimize it. This results in a gap between the model and the generated code. Manual code tuning, besides being error prone, can invalidate all the analysis and validation already done on the model. To avoid the drawbacks of hand tuning and, since UML is becoming an executable language, we propose a new Model Based Development (MBD) approach that skips the code generation step by directly compiling UML models. The biggest challenge for this approach, tackled in this paper, is to propose a model compiler that is more efficient than a code compiler for UML models. Our model compiler performs optimizations that code compilers are unable to perform, resulting in more compact assembly code.

20 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper presents an approach to multi-tier knowledge representation for cognitive robots, where ontologies are integrated with rules and Bayesian networks, which allows for efficient and comprehensive knowledge structuring and awareness based on logical and statistical reasoning.
Abstract: Cognitive robots are autonomous systems capable of artificial reasoning. Such systems can be achieved with a logical approach, but AI still struggles to connect abstract logic with real-world meanings. Knowledge representation and reasoning help to resolve this problem and to establish the vital connection between the knowledge, perception, and action of a robot. Cognitive robots must apply their knowledge to the perception of their world and generate appropriate actions in that world in compliance with their goals and beliefs. This paper presents an approach to multi-tier knowledge representation for cognitive robots, where ontologies are integrated with rules and Bayesian networks. The approach allows for efficient and comprehensive knowledge structuring and awareness based on logical and statistical reasoning.

19 citations


Proceedings ArticleDOI
11 Apr 2012
TL;DR: The aggressive task allocation strategy presented in this paper halves the worst-case execution times for the self-X properties compared to previous strategies, thus improving the suitability of the AHS for hard real-time systems.
Abstract: We present an aggressive task allocation strategy for an Artificial Hormone System (AHS). The AHS is a completely decentralized operating principle for a middleware which can be used to allocate tasks in a system of heterogeneous processing elements (PEs) or cores. Tasks are scheduled according to the suitability of the heterogeneous PEs, current PE load, and task relationships. In addition, the AHS provides properties like self-configuration, self-optimization, and self-healing by task allocation, and it is able to guarantee real-time bounds regarding these self-X properties. The aggressive task allocation strategy presented in this paper halves the worst-case execution times for the self-X properties compared to previous strategies, thus improving the suitability of the AHS for hard real-time systems.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A Web-of-Things based CPS framework for event handling and processing is proposed and a case study for achieving demand response in a smart home is provided.
Abstract: Cyber-Physical Systems (CPS) provide a smart infrastructure connecting abstract computational artifacts with the physical world. This paper presents some challenges for developing distributed real-time Cyber-Physical Systems, focusing on one particular challenge: event modelling in distributed real-time CPS. A Web-of-Things based CPS framework for event handling and processing is proposed. To illustrate the application of the proposed framework, a case study for achieving demand response in a smart home is provided.
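
As a purely hypothetical illustration of event handling in such a framework, the sketch below wires a demand-response handler to a publish/subscribe bus; the EventBus class, topic name, and price threshold are inventions for the example, not the paper's API.

    # Toy publish/subscribe bus dispatching smart-home events.
    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self.handlers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.handlers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self.handlers[topic]:
                handler(payload)

    def on_price_event(event):
        # demand response: shed deferrable load when the price spikes
        if event["price"] > 0.30:
            print("deferring dishwasher and EV charging")

    bus = EventBus()
    bus.subscribe("smart_meter/price", on_price_event)
    bus.publish("smart_meter/price", {"price": 0.42, "unit": "USD/kWh"})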

Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper presents a lightweight verification technique, applicable to dependable real-time systems, provided that the (abstract) model and the (concrete) implementation of the system under test are given in advance.
Abstract: This paper presents a lightweight verification technique, which is applicable to dependable real-time systems, provided that the (abstract) model and the (concrete) implementation of the system under test are given in advance. In addition to the usual quality assurance techniques at design time (e.g., formal verification) and at implementation time (e.g., testing), we provide a special form of model checking at run time. That is, we check the correctness of an actual system execution by means of exploring a partial model space covering the current execution trace. In doing so, concrete state information is observed from time to time while the system to be checked is running. This runtime information is used to guide model checking to reduce the model space to be explored. In this sense, we call this method online model checking. Since we do not directly check the execution trace itself, our online checking at model level is capable of checking a running system some steps ahead of the actual state of execution. In this paper, we describe online model checking as well as the underlying system architecture in general, explain the basic algorithm and its extension to improve performance, and provide experimental results.
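
A minimal sketch of the core idea, with a made-up transition relation and invariant: starting from the concrete state observed at run time, a bounded exploration of the model's successors can flag a violation several steps before the running system reaches it.

    # Bounded look-ahead from an observed state over an abstract model.
    from collections import deque

    def successors(state):                 # the model's transition relation
        x, y = state
        return [(x + 1, y), (x, y + 1)]

    def invariant(state):                  # the property to preserve
        return state[0] + state[1] < 10

    def check_ahead(observed, depth):
        frontier, seen = deque([(observed, 0)]), {observed}
        while frontier:
            state, d = frontier.popleft()
            if not invariant(state):
                return state               # counterexample ahead of execution
            if d < depth:
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, d + 1))
        return None

    print(check_ahead((3, 3), depth=4))    # found before the system gets there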

Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper discusses why the traditional virtual machine monitor design is not appropriate for embedded systems, and how the features of SPUMONE allow us to design modern complex embedded systems with less effort.
Abstract: In this paper, we introduce a lightweight processor abstraction layer named SPUMONE. SPUMONE provides virtual CPUs for the respective guest OSes and schedules them according to their priorities; in a typical case, it schedules Linux with a low priority and an RTOS with a high priority. We first discuss why the traditional virtual machine monitor design is not appropriate for embedded systems, and how the features of SPUMONE allow us to design modern complex embedded systems with less effort. We then describe two SPUMONE features for real-time resource management. SPUMONE also offers a novel mechanism to protect a critical component from malicious programs injected into the GPOS kernel.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A prediction framework to predict real-time component performance effectively is proposed that builds feature models based on the past usage experience of different users and employs time series analysis techniques on feature trends to make performance prediction.
Abstract: Cloud computing provides access to large pools of distributed components for building high-quality applications. User-side performance of cloud components depends strongly on remote server status as well as the unpredictability of the Internet, both of which vary over time. It is therefore important to explore a method to predict the real-time performance of cloud components. To address this critical challenge, this paper proposes a framework to predict real-time component performance effectively. Our prediction framework builds feature models based on the past usage experience of different users and employs time series analysis techniques on feature trends to make performance predictions. The results of large-scale experiments show the effectiveness and efficiency of our method.
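
The feature models are not spelled out in the abstract, so as one example of the time-series step, this sketch extrapolates a response-time series with Holt's double exponential smoothing; the choice of method and the sample values are assumptions for illustration.

    # One-step-ahead forecast of a component's response time (ms).
    def holt_forecast(series, alpha=0.5, beta=0.3):
        level, trend = series[0], series[1] - series[0]
        for x in series[1:]:
            prev = level
            level = alpha * x + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        return level + trend

    rt_history = [110, 120, 118, 130, 145, 150, 160]   # trending upward
    print(holt_forecast(rt_history))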

Proceedings ArticleDOI
11 Apr 2012
TL;DR: An economic model to control the overbooking policy while providing users a probability-based performance guarantee using risk estimation is proposed, and the GreedySelePod algorithm is designed to achieve traffic localization in order to reduce network bandwidth consumption and increase revenue.
Abstract: Efficient resource management in the virtualized data center is a practical concern and has attracted significant attention. In particular, an economic allocation mechanism is desired to maximize revenue for commercial cloud providers. This paper applies overbooking, a technique from revenue management, to avoid over-provisioning resources relative to runtime demand. We propose an economic model to control the overbooking policy while providing users a probability-based performance guarantee using risk estimation. To cooperate with the overbooking policy, we optimize VM placement with a traffic-aware strategy to satisfy the application's QoS requirements. We design the GreedySelePod algorithm to achieve traffic localization in order to reduce network bandwidth consumption, especially at network bottlenecks, thereby accepting more requests and increasing future revenue. Simulation results show that our approach can greatly improve the request acceptance rate and increase revenue by up to 87% with acceptable resource conflict.
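
One common way to realize a probability-based guarantee, shown here as an assumed illustration rather than the paper's actual model, is a normal-approximation admission test: accept a new VM only if the estimated probability that aggregate demand exceeds host capacity stays below the agreed risk.

    # Risk-bounded overbooking admission check (all numbers invented).
    from math import erf, sqrt

    def p_overflow(means, variances, capacity):
        mu, var = sum(means), sum(variances)
        z = (capacity - mu) / sqrt(var)
        return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # P(demand > capacity)

    hosted_mu, hosted_var = [2.0, 1.5, 3.0], [0.2, 0.1, 0.4]
    new_mu, new_var, capacity, risk = 1.2, 0.15, 10.0, 0.05

    if p_overflow(hosted_mu + [new_mu], hosted_var + [new_var], capacity) < risk:
        print("accept request (overbooked but within the risk bound)")
    else:
        print("reject or place elsewhere")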

Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper presents a scheduling simulator with application modeling capabilities for real-time applications, and the proposed approach supports modeling of complex task control flows and dependency relations between tasks.
Abstract: Several scheduling simulators exist to verify the behavior of real-time applications under different task scheduling algorithms. Current simulators cannot model the application accurately, and consequently the result of the simulation differs considerably from the actual behavior on a real computing system. This paper presents a scheduling simulator with application modeling capabilities for real-time applications. The proposed approach supports modeling of complex task control flows and dependency relations between tasks. To evaluate the modeling capabilities, we modeled a real engine control application and simulated it. We measured the response times of the application model running on our scheduling simulator and compared them with the ones obtained by running the real engine application binaries on an instruction set simulator. The average percentage errors in mean and worst-case response time between the two simulations were only 9.6% and 8.6%, respectively.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A dynamic memory scheduling system called DMSS is presented, which can manage memory resources in server consolidation environments and allocate memory among virtual machines on demand, bringing economic benefits to cloud service providers because more virtual machines can be accommodated at lower cost.
Abstract: As the foundation of cloud computing, server consolidation allows multiple computer infrastructures to run as virtual machines on a single physical node. It improves the utilization of most kinds of resources except memory under the x86 architecture. Because of inaccurate memory usage estimates and the lack of memory resource management, there is much service performance degradation in data centers, even though they occupy a large amount of memory. Furthermore, memory becomes insufficient for a physical server when many virtual machines depend on it. To improve this, we present a dynamic memory scheduling system called DMSS, which can manage memory resources in server consolidation environments and allocate memory among virtual machines on demand. We have designed and implemented the corresponding memory scheduling policy on the Xen virtualization platform to enhance memory efficiency and achieve service level agreements. Benchmarks show that DMSS makes accurate and rapid responses to memory changes and saves more than 30% of physical memory with less than 5% performance degradation. DMSS thus brings economic benefits to cloud service providers because more virtual machines can be accommodated at lower cost.
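
The allocation arithmetic behind such on-demand scheduling can be sketched as follows; the real DMSS works through Xen's memory management mechanisms (e.g., ballooning), and the demand figures, floor, and proportional policy here are illustrative assumptions only.

    # Redistribute a host's memory among VMs according to estimated demand.
    def reallocate(demands_mb, total_mb, floor_mb=256):
        need = sum(demands_mb)
        if need <= total_mb:
            spare = total_mb - need
            # grant each VM its demand plus a proportional share of the surplus
            return [d + spare * d / need for d in demands_mb]
        # under pressure, scale down proportionally but never below the floor
        return [max(floor_mb, total_mb * d / need) for d in demands_mb]

    print(reallocate([1024, 2048, 512], total_mb=4096))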

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A compositional approach to schedulability analysis of safety-critical Java programs is presented, where schedulability is checked on a model composed of the abstract specifications; as the specifications are implemented, these implementations can be checked individually.
Abstract: We present a compositional approach to schedulability analysis of safety-critical Java programs. We introduce a specification language for writing abstract behavioural specifications regarding task execution time and use of resources. Schedulability is checked on a model composed of the abstract specifications, possibly before any implementation exists, and as the specifications are implemented, the implementations can be checked individually. This means that library routines can potentially be checked separately and reused, and individual tasks can be verified against their specifications without performing a full-system analysis.
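
The specification language itself is not reproduced in the abstract; the sketch below instead shows the classic fixed-priority response-time test that such schedulability checks build on, applied to an invented three-task set.

    # Worst-case response time: R_i = C_i + sum over higher-priority j of
    # ceil(R_i / T_j) * C_j, iterated to a fixed point.
    from math import ceil

    def response_time(tasks, i):
        # tasks: (wcet, period, deadline) sorted by descending priority
        C, T = [t[0] for t in tasks], [t[1] for t in tasks]
        R = C[i]
        while True:
            nxt = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
            if nxt == R or nxt > tasks[i][2]:   # converged or already infeasible
                return nxt
            R = nxt

    tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
    print(all(response_time(tasks, i) <= tasks[i][2] for i in range(len(tasks))))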

Proceedings ArticleDOI
11 Apr 2012
TL;DR: Experimental evaluations of global and partitioned semi-fixed-priority scheduling algorithms in the extended imprecise computation model on multicore systems show that semi-fixed-priority scheduling has comparable overhead to fixed-priority scheduling.
Abstract: Multicore systems are now used in real-time applications such as robots, where imprecise tasks such as image processing are required to detect and avoid objects. However, existing real-time operating systems have evaluated multiprocessor real-time scheduling algorithms in Liu and Layland's model and not in the imprecise computation model. This paper performs experimental evaluations of global and partitioned semi-fixed-priority scheduling algorithms in the extended imprecise computation model on multicore systems. Experimental results show that semi-fixed-priority scheduling has comparable overhead to fixed-priority scheduling. In addition, global semi-fixed-priority scheduling has lower overhead than partitioned semi-fixed-priority scheduling.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: The lessons learned in architecting and applying a two-level health management strategy to assemblies of software components are presented.
Abstract: Complex real-time software systems require an active fault management capability. While testing, verification and validation schemes and their constant evolution help improve the dependability of these systems, an active fault management strategy is essential to potentially mitigate the unacceptable behaviors at run-time. In our work we have applied the experience gained from the field of Systems Health Management towards component-based software systems. The software components interact via well-defined concurrency patterns and are executed on a real-time component framework built upon ARINC-653 platform services. In this paper, we present the lessons learned in architecting and applying a two-level health management strategy to assemblies of software components.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: The tool JCopter is integrated with the WCET analysis tool and is used to explore different inlining strategies, resulting in a reduction of the WCET by a few percent up to a factor of about 2.5 on real-time benchmarks.
Abstract: Standard compilers optimize execution time for the average case. However, in hard real-time systems the worst-case execution time (WCET) is of primary importance; therefore, a compiler for real-time systems should include optimizations that aim to minimize the WCET. One effective compiler optimization is method inlining. It is especially important for languages like Java, where small setter and getter methods are considered good programming style. In this paper we present and explore WCET-driven inlining of Java methods. We use the WCET analysis tool for the Java processor JOP to guide the optimization along the worst-case path. The tool JCopter is integrated with the WCET analysis tool and is used to explore different inlining strategies. On real-time benchmarks the optimization results in a reduction of the WCET by a few percent up to a factor of about 2.
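
As a toy illustration of the decision involved (JCopter itself works on Java bytecode with the JOP WCET tool; every number and field name below is invented), inlining pays off on the worst-case path when the saved invoke/return overhead outweighs the penalty of code growth:

    # Hypothetical WCET-driven inlining heuristic.
    def worth_inlining(site):
        saved = site["invoke_overhead"] * site["wc_frequency"]  # cycles off the path
        return site["on_worst_case_path"] and saved > site["cache_penalty"]

    site = {"invoke_overhead": 22, "wc_frequency": 40,
            "cache_penalty": 300, "on_worst_case_path": True}
    print(worth_inlining(site))   # True: the worst-case path gets shorter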

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A robust real-time line tracking algorithm for UAVs using image processing: the computation region is first reduced, frames are then transformed to Hough space to detect lines, and finally line tracking is performed.
Abstract: Autonomous landing for fixed-wing aircraft is one of the issues in smart Unmanned Aerial Vehicles (UAVs). In this paper, we present a robust real-time line tracking algorithm for UAVs using image processing. In the proposed algorithm, we first reduce the computation region, then transform to Hough space to detect lines in the frame; finally, line tracking is performed. The experimental model was designed in the MATLAB Simulink environment and evaluated with several landing video clips. We performed a tolerance boundary test to check robustness. Experimental results indicate this approach is applicable in real use.
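
A comparable pipeline can be sketched with OpenCV standing in for the paper's MATLAB Simulink model; the region-of-interest crop plays the role of the reduced computation region, and the file name and thresholds are arbitrary:

    # Detect line candidates in the lower half of a landing frame.
    import cv2
    import numpy as np

    frame = cv2.imread("landing_frame.png", cv2.IMREAD_GRAYSCALE)
    roi = frame[frame.shape[0] // 2:, :]        # reduce the computation region
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is not None:
        x1, y1, x2, y2 = lines[0][0]            # track the strongest candidate
        print("line:", (x1, y1), "->", (x2, y2))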

Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper identifies a number of characteristics that affect the ability for a RTA to fulfill specified deadlines in a federated Cloud environment as a result of deploying environment diverse fault-tolerant schemes and performs initial experiments to justify the feasibility of this problem.
Abstract: Dependability is a critical concern in provisioning services in Cloud Computing environments. This is true when considering reliability, an attribute of dependability that is a critical and challenging problem in a Cloud context [2]. Fault tolerance is one means to attain reliability, and is typically implemented using some form of diversity. The federated Cloud, an emerging Cloud paradigm that orchestrates multiple Clouds, can implement environmental diversity for Cloud applications with relative ease and minimal additional cost to the consumer due to its inherent design. Real-Time Applications (RTAs) can benefit from deploying fault-tolerant schemes to fulfill deadlines in the presence of faults, as these schemes enable the provisioning of correct service when a component of the application fails. However, this diversity can become an issue when designing dynamically scalable fault-tolerant RTAs in a federated Cloud environment while also fulfilling QoS demands. In particular, building fault-tolerant RTAs using the diversity of Virtual Machine (VM) configurations and of the underlying Cloud infrastructure can negatively impact the ability to fulfill deadlines while still allowing the application to dynamically provision VMs with minimal human interaction. This paper identifies a number of characteristics that affect the ability of an RTA to fulfill specified deadlines in a federated Cloud environment as a result of deploying environmentally diverse fault-tolerant schemes. Furthermore, we have designed and performed initial experiments using a real-world Cloud federation to justify the feasibility of this problem. Results demonstrate that deploying RTAs in a federated Cloud environment can potentially increase the rate of deadline violations.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A software platform for an automotive gateway based on virtualization technology that runs two guest OSes concurrently on the same processor: one connected to in-vehicle embedded networks for running hard real-time applications, and the other connected to the outside internet for running soft or non-real-time applications.
Abstract: The automotive telematics gateway is an important part of modern high-end cars, providing various safety and convenience services for the vehicle owner. We have developed a software platform for an automotive gateway based on virtualization technology. It runs two guest OSes concurrently on the same processor: one connected to in-vehicle embedded networks for running hard real-time applications, and the other connected to the outside internet for running soft or non-real-time applications. Several representative applications have been implemented on this platform, including a car taillight switch, a virtual instrumentation panel, task migration in a pervasive computing environment, and NFC-based multimedia migration.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper extensively studies the problem of skyline queries on interval-based uncertain objects, which has never been studied before, and presents two efficient, I/O-optimal algorithms for conventional interval skyline queries and constrained interval skyline queries, respectively.
Abstract: Many recent applications involve processing and analyzing uncertain data, and several research efforts have addressed answering skyline queries efficiently on massive uncertain datasets. However, methods are lacking for computing these queries on uncertain data where each dimension of an uncertain object is represented as an interval or an exact value. In this paper, we extensively study the problem of skyline queries on these interval-based uncertain objects, which has never been studied before. We first model the problem of querying skylines on interval datasets. We then present two efficient, I/O-optimal algorithms for conventional interval skyline queries and constrained interval skyline queries, respectively. Extensive experiments demonstrate the efficiency of all our proposed algorithms.
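
The paper's exact dominance definition is not given in the abstract; this sketch uses one plausible rule for intervals where smaller is better (a dominates b when a's upper bound never exceeds b's lower bound, strictly in at least one dimension) and computes the skyline naively:

    # Naive interval skyline under an assumed dominance rule.
    def dominates(a, b):        # a, b: lists of (lo, hi) intervals per dimension
        le = all(ah <= bl for (_, ah), (bl, _) in zip(a, b))
        lt = any(ah < bl for (_, ah), (bl, _) in zip(a, b))
        return le and lt

    def skyline(objects):
        return [o for o in objects
                if not any(dominates(p, o) for p in objects if p is not o)]

    data = [[(1, 2), (4, 5)], [(3, 4), (1, 2)], [(5, 6), (6, 7)]]
    print(skyline(data))        # the third object is dominated and drops out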

Proceedings ArticleDOI
11 Apr 2012
TL;DR: This paper presents how, due to this simplified model, a single scope nesting level can be used to check the legality of every reference assignment and shows that with simple hardware extensions a processor can see some improvement in terms of execution time for applications where cross-scope references are frequent.
Abstract: Memory management in Safety-Critical Java (SCJ) is based on time-bounded, non-garbage-collected scoped memory regions used to store temporary objects. Scoped memory regions may have different lifetimes during the execution of a program; hence, to avoid leaving dangling pointers, it is necessary to check that reference assignments are performed only from objects in shorter-lived scopes to objects in longer-lived scopes (or between objects in the same scoped memory area). Compared to the RTSJ, SCJ offers a simplified memory model where only the immortal and mission memory areas are shared between threads and any other scoped region is thread-private. In this paper we show how, thanks to this simplified model, a single scope nesting level can be used to check the legality of every reference assignment. We also show that, with simple hardware extensions, a processor can achieve some improvement in execution time for applications where cross-scope references are frequent. Our proposal was implemented and tested on the Java Optimized Processor (JOP).
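
A minimal sketch of the single-nesting-level check, in illustrative Python rather than the paper's JOP implementation: with deeper (shorter-lived) scopes given larger levels, an assignment is legal only if the referenced object lives in an equally or longer lived scope than the object holding the reference.

    # Shared scopes get the smallest levels; private scopes nest deeper.
    IMMORTAL, MISSION = 0, 1

    def assignment_legal(holder_level, target_level):
        # holder.field = target is safe only if target outlives (or matches) holder
        return target_level <= holder_level

    print(assignment_legal(holder_level=3, target_level=MISSION))  # True
    print(assignment_legal(holder_level=MISSION, target_level=3))  # False: dangling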

Proceedings ArticleDOI
11 Apr 2012
TL;DR: Compared to a manually programmed solution, the presented approach enables the service developer to apply and parameterize refactorings with a level of confidence that they will not produce an invalid or 'corrupt' transformation of messages.
Abstract: This paper presents the development of REF-WS, an approach to enable a Web Service provider to reliably evolve their service through the application of refactoring transformations. REF-WS is intended to aid service providers, particularly in reliability- and performance-constrained domains, as it permits upgraded 'non-backwards compatible' services to be deployed into a performance-constrained network where existing consumers depend on an older version of the service interface. For this to be successful, the refactoring and message mediation must occur without affecting functional compatibility with the service's consumers, and must operate within the performance overhead expected of the original service, introducing as little latency as possible. Furthermore, compared to a manually programmed solution, the presented approach enables the service developer to apply and parameterize refactorings with confidence that they will not produce an invalid or 'corrupt' transformation of messages. This is achieved through the use of preconditions for the defined refactorings.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: The results show that it is possible to validate QoS properties by automatically adapting system execution traces at analysis time instead of modifying the application's existing source code.
Abstract: System execution traces are useful artifacts for validating distributed system quality-of-service (QoS) properties, such as end-to-end response time, throughput, and service time. With proper planning during the development phase of the software lifecycle, it is possible to ensure such traces contain the properties required to facilitate analysis for QoS validation. In some cases, however, it is not possible to ensure system execution traces contain the necessary properties for QoS analysis. This paper presents the System Execution Trace Adaptation Framework (SETAF) for adapting system execution traces to support analysis of QoS properties. It also presents results from applying SETAF to externally developed applications. The results show that it is possible to validate QoS properties by automatically adapting system execution traces at analysis time instead of modifying the application's existing source code.

Proceedings ArticleDOI
11 Apr 2012
TL;DR: A fault injection toolkit is developed that emulates a WAN within a LAN environment between composed service components and offers full control over the emulated environments, in addition to the ability to inject network-related and application-specific faults.
Abstract: Testing the performance of Web services and their Fault Tolerance Mechanisms (FTMs) is crucial for the development of today's applications. The performance and FTMs of composed service systems are hard to measure at design time because service instability is often caused by the nature of the network, and a real internet environment is difficult to set up and control for testing. We have developed a fault injection toolkit that emulates a WAN within a LAN environment between composed service components and offers full control over the emulated environments, in addition to the ability to inject network-related and application-specific faults. The tool also generates background workloads on the tested system to produce more realistic results. We describe an experiment carried out to test the impact of fault tolerance protocols deployed at a service client using our fault injection toolkit.