
Showing papers on "Application software" published in 2010


Proceedings ArticleDOI
20 Apr 2010
TL;DR: CloudAnalyst is developed to simulate large-scale Cloud applications with the purpose of studying the behavior of such applications under various deployment configurations. It gives developers insights into how to distribute applications among Cloud infrastructures and into value-added services such as optimization of application performance and provider income through the use of Service Brokers.
Abstract: Advances in Cloud computing open up many new possibilities for Internet application developers. Previously, a main concern of Internet application developers was deployment and hosting of applications, because it required acquisition of a server with a fixed capacity able to handle the expected application peak demand and the installation and maintenance of the whole software infrastructure of the platform supporting the application. Furthermore, the server was underutilized because peak traffic happens only at specific times. With the advent of the Cloud, deployment and hosting became cheaper and easier with the use of pay-per-use flexible elastic infrastructure services offered by Cloud providers. Because several Cloud providers are available, each one offering different pricing models and located in different geographic regions, a new concern of application developers is selecting providers and data center locations for applications. However, there is a lack of tools that enable developers to evaluate requirements of large-scale Cloud applications in terms of geographic distribution of both computing servers and user workloads. To fill this gap in tools for evaluation and modeling of Cloud environments and applications, we propose CloudAnalyst. It was developed to simulate large-scale Cloud applications with the purpose of studying the behavior of such applications under various deployment configurations. CloudAnalyst gives developers insights into how to distribute applications among Cloud infrastructures and into value-added services such as optimization of application performance and provider income through the use of Service Brokers.

612 citations


Proceedings ArticleDOI
07 Nov 2010
TL;DR: An overview is provided of important software engineering research issues related to the development of applications that run on mobile devices, including development processes, tools, user interface design, application portability, quality, and security.
Abstract: This paper provides an overview of important software engineering research issues related to the development of applications that run on mobile devices. Among the topics are development processes, tools, user interface design, application portability, quality, and security.

524 citations


Proceedings ArticleDOI
17 Feb 2010
TL;DR: The Hardware Locality (hwloc) software is introduced, which gathers hardware information about processors, caches, memory nodes and more, and exposes it to applications and runtime systems in an abstracted and portable hierarchical manner.
Abstract: The increasing numbers of cores, shared caches and memory nodes within machines introduce a complex hardware topology. High-performance computing applications now have to carefully adapt their placement and behavior according to the underlying hierarchy of hardware resources and their software affinities. We introduce the Hardware Locality (hwloc) software which gathers hardware information about processors, caches, memory nodes and more, and exposes it to applications and runtime systems in an abstracted and portable hierarchical manner. hwloc may significantly help performance by having runtime systems place their tasks or adapt their communication strategies depending on hardware affinities. We show that hwloc can already be used by popular high-performance OpenMP or MPI software. Indeed, scheduling OpenMP threads according to their affinities or placing MPI processes according to their communication patterns shows interesting performance improvements thanks to hwloc. An optimized MPI communication strategy may also be dynamically chosen according to the location of the communicating processes in the machine and its hardware characteristics.
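
As a rough sketch of the kind of topology discovery and binding described above, the following C++ program uses the public hwloc C API to load the machine topology, count cores and hardware threads, and bind the calling thread to the first core; exact object types and flags can vary between hwloc versions.

```cpp
#include <hwloc.h>
#include <cstdio>

int main() {
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);   // allocate a topology context
    hwloc_topology_load(topo);    // probe the current machine

    // Count hardware objects at two levels of the hierarchy.
    int n_cores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    int n_pus   = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
    std::printf("%d cores, %d hardware threads\n", n_cores, n_pus);

    // Bind the calling thread to the first core, a typical affinity use case.
    hwloc_obj_t core0 = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, 0);
    if (core0 != nullptr)
        hwloc_set_cpubind(topo, core0->cpuset, HWLOC_CPUBIND_THREAD);

    hwloc_topology_destroy(topo);
    return 0;
}
```

On a typical Linux system this compiles with the hwloc library linked in (e.g. `g++ example.cpp -lhwloc`, with a filename of your choosing).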

411 citations


Journal ArticleDOI
TL;DR: The Chaos report figures that are often used to indicate problems in application software development project management are examined, and it is shown that the reports contain major flaws.
Abstract: This paper examines the Chaos report figures that are often used to indicate problems in application software development project management and shows that the reports contain major flaws.

296 citations


Proceedings ArticleDOI
24 May 2010
TL;DR: This paper proposes a platform that enables people to share their Web-enabled devices so that others can use them and illustrates how to rely on existing social networks and their open APIs to enable owners to leverage the social structures in place for sharing smart things with others.
Abstract: In the emerging “Web of Things”, digitally augmented everyday objects are seamlessly integrated into the Web by reusing Web patterns such as REST. This results in an ecosystem of real-world devices that can be reused and recombined to create new ad-hoc applications. This, however, implies that devices are available to the world. In this paper, we propose a platform that enables people to share their Web-enabled devices so that others can use them. We illustrate how to rely on existing social networks and their open APIs (e.g. OpenSocial) to enable owners to leverage the social structures in place for sharing smart things with others. We finally discuss some of the challenges we identified towards a composable Web of Things.

210 citations


Journal ArticleDOI
TL;DR: A model showing how much electric power each server peer consumes to perform Web requests from client peers is discussed, along with algorithms for a client peer to select a server peer from a collection of server peers so that the total power consumption is reduced while a constraint such as a deadline is satisfied.
Abstract: Information systems based on the cloud computing model and peer-to-peer (P2P) model are now getting popular. In the cloud computing model, a cloud of servers supports thin clients with various types of service like Web pages and databases. On the other hand, every computer is a peer and there is no centralized coordinator in the P2P model. It is getting more significant to discuss how to reduce the total electric power consumption of computers in information systems to realize an eco-society. In this paper, we consider a Web type of application on P2P overlay networks. First, we discuss a model showing how much electric power each server peer consumes to perform Web requests from client peers. Then, we discuss algorithms for a client peer to select a server peer from a collection of server peers so that the total power consumption is reduced while a constraint such as a deadline is satisfied. Lastly, we evaluate the algorithms in terms of the total power consumption and throughput compared with traditional round-robin algorithms.
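
The paper's actual power model and selection algorithms are not reproduced here; the following hypothetical C++ sketch only illustrates the general pattern the abstract describes: among the server peers estimated to meet the deadline, the client picks the one with the lowest estimated energy cost. The `ServerPeer` fields and the estimates are invented placeholders.

```cpp
#include <optional>
#include <vector>

// Hypothetical per-server estimates; not the paper's actual power model.
struct ServerPeer {
    int    id;
    double est_exec_time;   // estimated time to serve the request (s)
    double est_power;       // estimated power draw while serving (W)
};

// Pick the server that minimizes estimated energy (power x time)
// among those expected to finish before the deadline.
std::optional<int> selectServer(const std::vector<ServerPeer>& peers,
                                double deadline) {
    std::optional<int> best;
    double best_energy = 0.0;
    for (const auto& p : peers) {
        if (p.est_exec_time > deadline) continue;       // deadline constraint
        double energy = p.est_power * p.est_exec_time;  // energy estimate (J)
        if (!best || energy < best_energy) {
            best = p.id;
            best_energy = energy;
        }
    }
    return best;  // empty if no peer can meet the deadline
}
```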

191 citations


Proceedings ArticleDOI
09 May 2010
TL;DR: A novel PaaS architecture being developed in the EU IST IRMOS project targeting real-time Quality of Service (QoS) guarantees for online interactive multimedia applications is presented.
Abstract: Cloud computing offers the potential to dramatically reduce the cost of software services through the commoditization of information technology assets and on-demand usage patterns. However, the complexity of determining resource provision policies for applications in such complex environments introduces significant inefficiencies and has driven the emergence of a new class of infrastructure called Platform-as-a-Service (PaaS). In this paper, we present a novel PaaS architecture being developed in the EU IST IRMOS project targeting real-time Quality of Service (QoS) guarantees for online interactive multimedia applications. The architecture considers the full service lifecycle including service engineering, service level agreement design, provisioning and monitoring. QoS parameters at both application and infrastructure levels are given specific attention as the basis for provisioning policies in the context of temporal constraints. The generic applicability of the architecture is being verified and validated through implemented scenarios from three important application sectors (film post-production, virtual augmented reality for engineering design, collaborative e-Learning in virtual worlds).

149 citations


Journal ArticleDOI
12 Apr 2010
TL;DR: The Multithreaded Application Real-Time executor (MARTe), a framework built over a multiplatform library that allows the execution of the same code in different operating systems, is currently being used to successfully drive the plasma vertical stabilization controller on the largest magnetic confinement fusion device in the world.
Abstract: Development of real-time applications is usually associated with nonportable code targeted at specific real-time operating systems. The boundary between hardware drivers, system services, and user code is commonly not well defined, making the development in the target host significantly difficult. The Multithreaded Application Real-Time executor (MARTe) is a framework built over a multiplatform library that allows the execution of the same code in different operating systems. The framework provides the high-level interfaces with hardware, external configuration programs, and user interfaces, assuring at the same time hard real-time performance. End-users of the framework are required to define and implement algorithms inside a well-defined block of software, named Generic Application Module (GAM), that is executed by the real-time scheduler. Each GAM is reconfigurable with a set of predefined configuration meta-parameters and interchanges information using a set of data pipes that are provided as inputs and required as outputs. Using these connections, different GAMs can be chained either in series or in parallel. GAMs can be developed and debugged in a non-real-time system and, only once the robustness of the code and correctness of the algorithm are verified, deployed to the real-time system. The software also supplies a large set of utilities that greatly ease the interaction with and debugging of a running system. Among the most useful are a highly efficient real-time logger, HTTP introspection of real-time objects, and HTTP remote configuration. MARTe is currently being used to successfully drive the plasma vertical stabilization controller on the largest magnetic confinement fusion device in the world, with a control loop cycle of 50 μs and a jitter under 1 μs. In this particular project, MARTe is used with the Real-Time Application Interface (RTAI)/Linux operating system, exploiting the new x86 multicore processor technology.

136 citations


Patent
26 Aug 2010
TL;DR: A program guide system is provided in this paper that supports a program guide application and multiple non-guide applications to use both device resources and program guide resources, and the application interface maintains a list of registered applications and directs control requests from various applications to the current primary application.
Abstract: A program guide system is provided that supports a program guide application and multiple non-guide applications. The program guide system has a program guide application interface that allows the non-guide applications to use both device resources and program guide resources. The application interface maintains a list of registered applications and directs control requests from various applications to the current primary application. The application interface also has a user interface input director that directs keystrokes and other user input commands to the appropriate application. If a keystroke for the program guide application is detected while a non-guide application is running, the program guide application is invoked.

132 citations


Journal ArticleDOI
01 Apr 2010
TL;DR: The current state of the CernVM project is presented, the performance of CVMFS is compared with that of a traditional network file system such as AFS, and possible scenarios that could further improve its performance and scalability are discussed.
Abstract: CernVM is a Virtual Software Appliance capable of running physics applications from the LHC experiments at CERN. It aims to provide a complete and portable environment for developing and running LHC data analysis on any end-user computer (laptop, desktop) as well as on the Grid, independently of Operating System platforms (Linux, Windows, MacOS). The experiment application software and its specific dependencies are built independently from CernVM and delivered to the appliance just in time by means of a CernVM File System (CVMFS) specifically designed for efficient software distribution. The procedures for building, installing and validating software releases remain under the control and responsibility of each user community. We provide a mechanism to publish pre-built and configured experiment software releases to a central distribution point from where they find their way to the running CernVM instances via a hierarchy of proxy servers or content delivery networks. In this paper, we present the current state of the CernVM project, compare the performance of CVMFS with that of a traditional network file system such as AFS, and discuss possible scenarios that could further improve its performance and scalability.

129 citations


Journal ArticleDOI
TL;DR: Pin is a software system that performs runtime binary instrumentation of Linux and Microsoft Windows applications and aims to provide an instrumentation platform for building a wide variety of program analysis tools, called pintools.
Abstract: Software instrumentation provides the means to collect information on and efficiently analyze parallel programs. Using Pin, developers can build tools to detect and examine dynamic behavior including data races, memory system behavior, and parallelizable loops. Pin is a software system that performs runtime binary instrumentation of Linux and Microsoft Windows applications. Pin's aim is to provide an instrumentation platform for building a wide variety of program analysis tools, called pintools. By performing the instrumentation on the binary at runtime, Pin eliminates the need to modify or recompile the application's source and supports the instrumentation of programs that dynamically generate code.
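
To give a concrete sense of what a pintool looks like, the sketch below follows the classic instruction-counting example from the Pin documentation: an instrumentation callback inserts a call to an analysis routine before every instruction, and the total is printed when the instrumented program exits. It is written against Pin's C++ API and is built with the Pin kit's makefiles rather than as a standalone program.

```cpp
#include "pin.H"
#include <iostream>

static UINT64 icount = 0;

// Analysis routine: executed before every instrumented instruction.
VOID docount() { icount++; }

// Instrumentation routine: called by Pin once for each instruction seen.
VOID Instruction(INS ins, VOID *v) {
    INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)docount, IARG_END);
}

VOID Fini(INT32 code, VOID *v) {
    std::cerr << "Instructions executed: " << icount << std::endl;
}

int main(int argc, char *argv[]) {
    if (PIN_Init(argc, argv)) return 1;        // parse Pin command line
    INS_AddInstrumentFunction(Instruction, 0); // hook the instruction stream
    PIN_AddFiniFunction(Fini, 0);              // report at process exit
    PIN_StartProgram();                        // never returns
    return 0;
}
```

It would typically be launched as something like `pin -t obj-intel64/inscount.so -- /bin/ls`, with the tool path depending on the local Pin kit layout.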

Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is argued that adopting a best-effort service model for various software and hardware components of the computing platform stack can lead to drastic improvements in scalability and large improvements in performance and energy efficiency.
Abstract: With the advent of mainstream parallel computing, applications can obtain better performance only by scaling to platforms with larger numbers of cores. This is widely considered to be a very challenging problem due to the difficulty of parallel programming and the bottlenecks to efficient parallel execution. Inspired by how networking and storage systems have scaled to handle very large volumes of packet traffic and persistent data, we propose a new approach to the design of scalable, parallel computing platforms. For decades, computing platforms have gone to great lengths to ensure that every computation specified by applications is faithfully executed. While this design philosophy has remained largely unchanged, applications and the basic characteristics of their workloads have changed considerably. A wide range of existing and emerging computing workloads have an inherent forgiving nature. We therefore argue that adopting a best-effort service model for various software and hardware components of the computing platform stack can lead to drastic improvements in scalability. Applications are cognizant of the best-effort model, and separate their computations into those that may be executed on a best-effort basis and those that require the traditional execution guarantees. Best-effort computations may be exploited to simply reduce the computing workload, shape it to be more suitable for parallel execution, or execute it on unreliable hardware components. Guaranteed computations are realized either through an overlay software layer on top of the best-effort substrate, or through the use of application-specific strategies. We describe a system architecture for a best-effort computing platform, provide examples of parallel software and hardware that embody the best-effort model, and show that large improvements in performance and energy efficiency are possible through the adoption of this approach.

Proceedings ArticleDOI
20 May 2010
TL;DR: Scavenger as discussed by the authors is a new cyber foraging system supporting easy development of mobile cyber-foraging applications, while still delivering efficient, mobile use of remote computing resources through the use of a custom built mobile code execution environment and a new dual-profiling scheduler.
Abstract: Cyber foraging is a pervasive computing technique where small mobile devices offload resource intensive tasks to stronger computing machinery in the vicinity. This paper presents Scavenger—a new cyber foraging system supporting easy development of mobile cyber foraging applications, while still delivering efficient, mobile use of remote computing resources through the use of a custom built mobile code execution environment and a new dual-profiling scheduler. One of the main difficulties within cyber foraging is that it is very challenging for application programmers to develop cyber foraging enabled applications. An application using cyber foraging is working with mobile, distributed and, possibly, parallel computing; fields within computer science known to be hard for programmers to grasp. In this paper it is shown by example, how a highly distributed, parallel, cyber foraging enabled application can be developed using Scavenger. Benchmarks of the example application are presented showing that Scavenger imposes only minimal overhead when no surrogates are available, while greatly improving performance as surrogates become available.

Journal ArticleDOI
TL;DR: DUALLy is presented, an automated framework that enables interoperability among architectural languages and tools: given a number of architectural languages and tools, they can all interoperate thanks to automated model transformation techniques.
Abstract: Many architectural languages have been proposed in the last 15 years, each one with the chief aim of becoming the ideal language for specifying software architectures. What is evident nowadays, instead, is that architectural languages are defined by stakeholder concerns. Capturing all such concerns within a single, narrowly focused notation is impossible. At the same time, it is also impractical to define and use a "universal" notation, such as UML. As a result, many domain-specific notations for architectural modeling have been proposed, each one focusing on a specific application domain, analysis type, or modeling environment. As a drawback, a proliferation of languages exists, each one with its own specific notation, tools, and domain specificity. No effective interoperability is possible to date. Therefore, if a software architect has to model a concern not supported by his own language/tool, he has to manually transform (and, eventually, keep aligned) the available architectural specification into the required language/tool. This paper presents DUALLy, an automated framework that enables architectural languages and tools to interoperate. Given a number of architectural languages and tools, they can all interoperate thanks to automated model transformation techniques. DUALLy is implemented as an Eclipse plugin. Putting it in practice, we apply the DUALLy approach to the Darwin/FSP ADL and to a UML2.0 profile for software architectures. By making use of an industrial complex system, we transform a UML software architecture specification into Darwin/FSP, make some verifications using LTSA, and reflect changes required by the verifications back into the UML specification.

Proceedings ArticleDOI
17 May 2010
TL;DR: The design and implementation of a SAGA-based Pilot-Job is described, which supports a wide range of application types and is usable over a broad range of infrastructures, i.e., it is general-purpose and extensible, and, as the authors argue, also interoperable with Clouds.
Abstract: The uptake of distributed infrastructures by scientific applications has been limited by the availability of extensible, pervasive and simple-to-use abstractions which are required at multiple levels -- development, deployment and execution stages of scientific applications. The Pilot-Job abstraction has been shown to be an effective abstraction to address many requirements of scientific applications. Specifically, Pilot-Jobs support the decoupling of workload submission from resource assignment; this results in a flexible execution model, which in turn enables the distributed scale-out of applications on multiple and possibly heterogeneous resources. Most Pilot-Job implementations, however, are tied to a specific infrastructure. In this paper, we describe the design and implementation of a SAGA-based Pilot-Job, which supports a wide range of application types and is usable over a broad range of infrastructures, i.e., it is general-purpose and extensible, and as we will argue is also interoperable with Clouds. We discuss how the SAGA-based Pilot-Job is used for different application types and supports concurrent usage across multiple heterogeneous distributed infrastructures, including concurrent usage across Clouds and traditional Grids/Clusters. Further, we show how Pilot-Jobs can help to support dynamic execution models and thus introduce new opportunities for distributed applications. We also demonstrate, for the first time that we are aware of, the use of multiple Pilot-Job implementations to solve the same problem; specifically, we use the SAGA-based Pilot-Job on high-end resources such as the TeraGrid and the native Condor Pilot-Job (Glide-in) on Condor resources. Importantly, both are invoked via the same interface without changes at the development or deployment level, but only an execution (run-time) decision.

Proceedings ArticleDOI
24 Oct 2010
TL;DR: This work defines an Execution Engine that coordinates the execution of the application software so as to meet its timing constraints and shows that time-determinism is a sufficient condition for time-robustness.
Abstract: Correct and efficient implementation of general real-time applications remains by far an open problem. A key issue is meeting timing constraints whose satisfaction depends on features of the execution platform, in particular its speed. Existing rigorous implementation techniques are applicable to specific classes of systems, e.g. those with periodic tasks or time-deterministic systems. We present a general model-based implementation method for real-time systems based on the use of two models: an abstract model representing the behavior of real-time software as a timed automaton, which describes user-defined platform-independent timing constraints and whose transitions are timeless and correspond to the execution of statements of the real-time software; and a physical model representing the behavior of the real-time software running on a given platform, obtained by assigning execution times to the transitions of the abstract model. A necessary condition for implementability is time-safety, that is, any (timed) execution sequence of the physical model is also an execution sequence of the abstract model. Time-safety simply means that the platform is fast enough to meet the timing requirements. As execution times of actions are not known exactly, time-safety is checked for worst-case execution times of actions by making an assumption of time-robustness: time-safety is preserved when the speed of the execution platform increases. We show that, as a rule, physical models are not time-robust, and that time-determinism is a sufficient condition for time-robustness. For given real-time software and an execution platform corresponding to a time-robust model, we define an Execution Engine that coordinates the execution of the application software so as to meet its timing constraints. Furthermore, in case of non-robustness, the Execution Engine can detect violations of time-safety and stop execution. We have implemented the Execution Engine for BIP programs with real-time constraints. We have validated the implementation method for an adaptive MPEG video encoder. Experimental results reveal the existence of timing anomalies seriously degrading performance for increasing platform execution speed.
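
Paraphrasing the definitions above in a compact form (this is a reading of the abstract, not necessarily the paper's exact formalization), with A the abstract model and P_δ the physical model obtained with execution times δ:

```latex
% Paraphrase of the abstract's definitions; notation is ours, not the paper's.
\begin{align*}
\text{time-safety of } P_\delta:\quad
  & \mathit{Traces}(P_\delta) \subseteq \mathit{Traces}(A) \\
\text{time-robustness:}\quad
  & \delta' \le \delta \ \text{ and } \ P_\delta \text{ time-safe}
    \ \Longrightarrow\ P_{\delta'} \text{ time-safe}
\end{align*}
```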

Proceedings ArticleDOI
06 Apr 2010
TL;DR: This paper proposes a technique, in which a set of generic oracle comparators, template generators, and visualizations of test failure output are provided to deal with dynamic non-deterministic behavior in Ajax user interfaces.
Abstract: There is a growing trend to move desktop applications towards the web using advances made in web technologies such as Ajax. One common way to provide assurance about the correctness of such complex and evolving systems is through regression testing. Regression testing classical web applications has already been a notoriously daunting task because of the dynamism in web interfaces. Ajax applications pose an even greater challenge since the test case fragility degree is higher due to extensive run-time manipulation of the DOM tree and asynchronous client/server interactions. In this paper, we propose a technique, in which we automatically generate test cases and apply pipelined oracle comparators along with generated DOM templates, to deal with dynamic non-deterministic behavior in Ajax user interfaces. Our approach, implemented in Crawljax, is open source and provides a set of generic oracle comparators, template generators, and visualizations of test failure output. We describe two case studies evaluating the effectiveness, scalability, and required manual effort of the approach.

Journal ArticleDOI
TL;DR: This paper proposes a reliability and testing resources allocation model that is able to provide solutions at various levels of detail, depending upon the information the engineer has about the system, and aims to quantitatively identify the most critical components of software architecture in order to best assign the testing resources to them.
Abstract: With software systems increasingly being employed in critical contexts, assuring high reliability levels for large, complex systems can incur huge verification costs. Existing standards usually assign predefined risk levels to components in the design phase, to provide some guidelines for the verification. It is a rough-grained assignment that does not consider the costs and does not provide a sufficient modeling basis to let engineers quantitatively optimize resource usage. Software reliability allocation models partially address such issues, but they usually make so many assumptions on the input parameters that their application is difficult in practice. In this paper, we try to reduce this gap, proposing a reliability and testing resources allocation model that is able to provide solutions at various levels of detail, depending upon the information the engineer has about the system. The model aims to quantitatively identify the most critical components of the software architecture in order to best assign the testing resources to them. A tool for the solution of the model is also developed. The model is applied to an empirical case study, a program developed for the European Space Agency, to verify the model's prediction abilities and evaluate the impact of parameter estimation errors on the prediction accuracy.
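
A generic formulation of this kind of allocation problem (illustrative only; not necessarily the exact model developed in the paper) assigns each component i a reliability R_i achieved at testing cost c_i(R_i) and minimizes total cost subject to a system-level reliability target R*:

```latex
% Generic reliability/testing-resource allocation sketch; illustrative only.
\begin{aligned}
\min_{R_1,\dots,R_n}\quad & \sum_{i=1}^{n} c_i(R_i) \\
\text{s.t.}\quad          & R_{\mathrm{sys}}(R_1,\dots,R_n) \ge R^{*},\\
                          & 0 < R_i \le 1,\qquad i = 1,\dots,n .
\end{aligned}
```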

Proceedings ArticleDOI
16 Apr 2010
TL;DR: A model of mobile applications from a black-box view and a distance metric for mobile software test cases are proposed, together with an ART test case generation technique for mobile applications.
Abstract: Mobile applications are becoming more and more powerful yet also more complex. While mobile application users expect the application to be reliable and secure, the complexity of the mobile application makes it prone to faults. Mobile application engineers and testers use testing techniques to ensure the quality of mobile applications. However, the testing of mobile applications is time-consuming and hard to automate. In this paper, we model the mobile application from a black-box view and propose a distance metric for the test cases of mobile software. We further propose an ART (adaptive random testing) test case generation technique for mobile applications. Our experiment shows that our ART tool can reduce both the number of test cases and the time needed to expose the first fault when compared with a random technique.
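
The paper's distance metric and generation tool are not reproduced here; the C++ sketch below only shows the generic fixed-candidate-set ART loop that such a metric would plug into. The `TestCase` representation, the random generator, and the distance function are invented placeholders.

```cpp
#include <algorithm>
#include <cstdlib>
#include <limits>
#include <vector>

// Hypothetical representation of a mobile-app test case (an event sequence);
// the paper's actual model and distance metric are not reproduced here.
struct TestCase { std::vector<int> events; };

// Placeholder generator: a random sequence of up to 10 event IDs.
TestCase randomTestCase() {
    TestCase t;
    int len = 1 + std::rand() % 10;
    for (int i = 0; i < len; ++i) t.events.push_back(std::rand() % 100);
    return t;
}

// Placeholder metric: count of differing positions plus a length penalty.
double distance(const TestCase& a, const TestCase& b) {
    std::size_t n = std::min(a.events.size(), b.events.size());
    double diff = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        if (a.events[i] != b.events[i]) diff += 1.0;
    diff += static_cast<double>(std::max(a.events.size(), b.events.size()) - n);
    return diff;
}

// Fixed-candidate-set ART: generate k random candidates and keep the one
// farthest (by the metric) from all previously executed tests.
TestCase nextArtTest(const std::vector<TestCase>& executed, int k = 10) {
    if (executed.empty()) return randomTestCase();
    TestCase best;
    double bestMinDist = -1.0;
    for (int i = 0; i < k; ++i) {
        TestCase cand = randomTestCase();
        double minDist = std::numeric_limits<double>::max();
        for (const auto& t : executed)
            minDist = std::min(minDist, distance(cand, t));
        if (minDist > bestMinDist) { bestMinDist = minDist; best = cand; }
    }
    return best;
}
```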

Journal ArticleDOI
TL;DR: The mechanisms devised as part of this model fall into two categories: asynchronous event handling and synchronous exception handling, which enable designing recovery actions to handle different kinds of failure conditions arising in context-aware applications.
Abstract: In this paper, we present a forward recovery model for programming robust context-aware applications. The mechanisms devised as part of this model fall into two categories: asynchronous event handling and synchronous exception handling. These mechanisms enable designing recovery actions to handle different kinds of failure conditions arising in context-aware applications. These include service discovery failures, service binding failures, exceptions raised by a service, and context invalidations. This model is integrated in the high-level programming framework that we have designed for building context-aware collaborative (CSCW) applications. In this paper, we demonstrate the capabilities of this model for programming various kinds of recovery patterns in context-aware applications.

Proceedings ArticleDOI
17 May 2010
TL;DR: Preliminary results show that dynamic bottleneck detection and resolution for multi-tier Web applications hosted on the cloud will help providers offer SLAs with response time guarantees.
Abstract: Current service-level agreements (SLAs) offered by cloud providers do not make guarantees about the response time of Web applications hosted on the cloud. Satisfying a maximum average response time guarantee for Web applications is difficult due to unpredictable traffic patterns. The complex nature of multi-tier Web applications increases the difficulty of identifying bottlenecks and resolving them automatically. It may be possible to minimize the probability that tiers (hosted on virtual machines) become bottlenecks by optimizing the placement of the virtual machines in a cloud. This research focuses on enabling clouds to offer multi-tier Web application owners maximum response time guarantees while minimizing resource utilization. We present our basic approach, preliminary experiments, and results on a EUCALYPTUS-based testbed cloud. Our preliminary results show that dynamic bottleneck detection and resolution for multi-tier Web applications hosted on the cloud will help providers offer SLAs with response time guarantees.

Journal ArticleDOI
TL;DR: Key R&D challenges facing developers of mobile cyber-physical applications that integrate with Internet services are presented and emerging solutions to address these challenges are summarized.
Abstract: The powerful processors and variety of sensors in new and planned mobile Internet devices, such as Apple’s iPhone and Android-based smartphones, can be leveraged to build cyber-physical applications that collect sensor data from the real world and communicate it back to Internet services for processing and aggregation. This article presents key R&D challenges facing developers of mobile cyber-physical applications that integrate with Internet services and summarizes emerging solutions to address these challenges. For example, application software should be architected to conserve power, which motivates R&D on tools that can predict the power consumption characteristics of mobile software architectures. Other R&D challenges involve the relative paucity of work on software and sensor data collection architectures that cater to the powerful capabilities and cyber-physical aspects of mobile Internet devices, which motivates R&D on architectures tailored to the latest mobile Internet devices.

Patent
19 Jan 2010
TL;DR: The control panels can consist of a variety of components which include user interface elements (such as sliders, buttons, and checkboxes), charts and maps as mentioned in this paper, and the underlying simulation is generated based on data sources within an application software program file, selected by the user during the control panel creation process.
Abstract: The invention relates to a method and tool which allows users to create interactive representations of input and output data, and simulate the associated algorithms used to manipulate this data, that are used in spreadsheet applications and other similar software programs. The interactive simulation is visually represented by a customizable set of components which hereinafter will be referred to as a control panel. The control panels can consist of a variety of components which include user interface elements (such as sliders, buttons, and checkboxes), charts and maps. The underlying simulation is generated based on data sources within an application software program file (e.g., spreadsheet data cells) selected by the user during the control panel creation process.

Patent
11 Nov 2010
TL;DR: In this article, computer-implemented methods for controlling a web-based application include providing, from a server, a web page that includes a web based application and a uniform resource identifier specifying a control interface for the webbased application and transmitting the control interface specified by the uniform resource identifiers to a client for use in controlling the one or more functions of the web based applications.
Abstract: Systems and techniques by which a single electronic device can implement a variety of customized control interfaces. The control interfaces can be tailored to specific operations performed the device which is controlled. In one aspect, computer-implemented methods for controlling a web-based application include providing, from a server, a web page that includes a web-based application and a uniform resource identifier specifying a control interface for the web-based application and transmitting the control interface specified by the uniform resource identifier to a client for use in controlling the one or more functions of the web-based application. The control interface is configured to permit a user to control one or more functions of the web-based application from a mobile device.

Proceedings ArticleDOI
06 Apr 2010
TL;DR: In this paper, a process of verifying FBDs with the NuSMV model checker is described; a novel transformation step reduces the state space dramatically so that realistic application components can be verified.
Abstract: The development of Programmable Logic Controllers (PLCs) in the last years has made it possible to apply them in ever more complex tasks. Many systems based on these controllers are safety-critical, the certification of which entails a great effort. Therefore, there is a big demand for tools for analyzing and verifying PLC applications. Among the PLC-specific languages proposed in the standard IEC 61131-3, FBD (Function Block Diagram) is a graphical one widely used in rail automation. In this paper, a process of verifying FBDs by the NuSMV model checker is described. It consists of three transformation steps: FBD → TextFBD → tFBD → NuSMV. The novel step introduced here is the second one: it reduces the state space dramatically so that realistic application components can be verified. The process has been developed and tested in the area of rail automation, in particular interlocking systems. As a part of the interlocking software, a typical point logic has been used as a test case.

Proceedings ArticleDOI
24 Oct 2010
TL;DR: This paper proposes a methodology for automatically producing efficient and correct-by-construction distributed implementations, starting from a high-level model of the application software in BIP and transforming arbitrary BIP models into Send/Receive BIP models directly implementable on distributed execution platforms.
Abstract: Although distributed systems are widely used nowadays, their implementation and deployment is still a time-consuming, error-prone, and hardly predictable task. In this paper, we propose a methodology for automatically producing efficient and correct-by-construction distributed implementations, starting from a high-level model of the application software in BIP. BIP (Behavior, Interaction, Priority) is a component-based framework with formal semantics that rely on multi-party interactions for synchronizing components. Our methodology transforms arbitrary BIP models into Send/Receive BIP models, directly implementable on distributed execution platforms. The transformation consists of (1) breaking the atomicity of actions in atomic components by replacing strong synchronizations with asynchronous Send/Receive interactions; (2) inserting several distributed controllers that coordinate the execution of interactions according to a user-defined partition; and (3) augmenting the model with a distributed algorithm for handling conflicts between controllers, preserving observational equivalence to the initial models. Currently, it is possible to generate from Send/Receive models stand-alone C++ implementations using either TCP sockets for conventional communication or MPI for deployment on multi-core platforms. This method is fully implemented. We report concrete results obtained under different scenarios.

Proceedings ArticleDOI
06 Apr 2010
TL;DR: The results show that mutation analysis can help create tests that are effective at finding web application faults, as well as indicating several directions for improvement.
Abstract: As our awareness of the complexities inherent in web applications grows, we find an increasing need for more sophisticated ways to test them. Many web application faults are a result of how web software components interact; sometimes client-server and sometimes server-server. This paper presents a novel solution to the problem of integration testing of web applications by using mutation analysis. New mutation operators are defined, a tool (webMuJava) that implements these operators is presented, and results from a case study applying the tool to test a small web application are presented. The results show that mutation analysis can help create tests that are effective at finding web application faults, as well as indicating several directions for improvement.

Journal ArticleDOI
TL;DR: The Context-Aware Browser for mobile devices senses the surrounding environment, infers the user's current context, and proactively searches for and activates relevant Web documents and applications.
Abstract: The typical scenario of a user seeking information on the Web requires significant effort to get the desired information. In a world where information is essential, it can be crucial for users to get the desired information quickly even when they are away from their desktop computers. The Context-Aware Browser for mobile devices senses the surrounding environment, infers the user's current context, and proactively searches for and activates relevant Web documents and applications.

Proceedings ArticleDOI
06 Apr 2010
TL;DR: A technique for testing RIAs that generates test cases from application execution traces, and obtains more scalable test suites thanks to testing reduction techniques is presented.
Abstract: The rapid and growing diffusion of Rich Internet Applications (RIAs) with their enhanced interactivity, responsiveness and dynamicity is sharpening the distance between Web applications and desktop applications, making the Web experience more and more appealing and user-friendly. This paper presents a technique for testing RIAs that generates test cases from application execution traces, and obtains more scalable test suites thanks to testing reduction techniques. Execution traces provide a fast and cheap way for generating test cases and can be obtained either from user sessions, or by crawling the application or by combining both approaches. The proposed technique has been evaluated by a preliminary experiment that investigated the effectiveness of different approaches for execution trace collection and of several criteria for reducing the test suites. The experimental results showed the feasibility of the technique and that its effectiveness can be improved by hybrid approaches that combine both manually and automatically obtained execution traces of the application.

Proceedings ArticleDOI
08 Mar 2010
TL;DR: The architecture uses a Sobel edge detector to achieve real-time (75 fps) performance, and is configurable in terms of various application parameters, making it suitable for a number of application environments.
Abstract: Stereoscopic 3D reconstruction is an important algorithm in the field of Computer Vision, with a variety of applications in embedded and real-time systems. Existing software-based implementations cannot satisfy the performance requirements for such constrained systems; hence an embedded hardware mechanism might be more suitable. In this paper, we present an architecture of a 3D reconstruction system for stereoscopic images, which we implement on Virtex2 Pro FPGA. The architecture uses a Sobel edge detector to achieve real-time (75 fps) performance, and is configurable in terms of various application parameters, making it suitable for a number of application environments. The paper also presents a design exploration on algorithmic parameters such as disparity range, correlation window size, and input image size, illustrating the impact on the performance for each parameter.