
Showing papers on "Application software published in 2007"


Proceedings ArticleDOI
23 May 2007
TL;DR: The paper briefly reviews widely used optimization techniques and the key ingredients required for their successful application to software engineering, providing an overview of existing results in eight software engineering application domains.
Abstract: This paper describes work on the application of optimization techniques in software engineering. These optimization techniques come from the operations research and metaheuristic computation research communities. The paper briefly reviews widely used optimization techniques and the key ingredients required for their successful application to software engineering, providing an overview of existing results in eight software engineering application domains. The paper also describes the benefits that are likely to accrue from the growing body of work in this area and provides a set of open problems, challenges and areas for future work.
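
The flavor of these techniques is easy to show concretely. Below is a minimal sketch, with invented costs and values, of a random-restart hill climber applied to a toy "next release problem" (choosing requirements to maximize stakeholder value under a cost budget), one of the application domains this line of work covers.

```python
# Toy search-based software engineering example: random-restart hill climbing.
# Costs, values and the budget are invented for illustration.
import random

costs  = [4, 2, 7, 3, 5, 1]   # hypothetical cost per requirement
values = [9, 3, 8, 4, 6, 2]   # hypothetical stakeholder value
BUDGET = 12

def value(sol):
    if sum(c for c, s in zip(costs, sol) if s) > BUDGET:
        return -1                      # infeasible: over budget
    return sum(v for v, s in zip(values, sol) if s)

def hill_climb(restarts=20):
    best = None
    for _ in range(restarts):
        sol = [random.random() < 0.5 for _ in costs]
        improved = True
        while improved:
            improved = False
            for i in range(len(sol)):  # neighbourhood: flip one requirement
                neighbour = sol[:]
                neighbour[i] = not neighbour[i]
                if value(neighbour) > value(sol):
                    sol, improved = neighbour, True
        if best is None or value(sol) > value(best):
            best = sol
    return best, value(best)

print(hill_climb())
```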

667 citations


Journal ArticleDOI
TL;DR: This paper investigates the relationship between Web 2.0 and SOAs and their respective applications from both a technological and business perspective.
Abstract: Recently, the relationship between Web 2.0 and service-oriented architectures (SOAs) has received an enormous amount of coverage because of the notion of complexity-hiding and reuse, along with the concept of loosely coupling services. Some argue that Web 2.0 and SOAs have significantly different elements and thus cannot be regarded as parallel philosophies. Others, however, consider the two concepts as complementary and regard Web 2.0 as the global SOA. This paper investigates these two philosophies and their respective applications from both a technological and business perspective.

254 citations


Journal ArticleDOI
TL;DR: This article provides an overview of existing research in architecture-based software reliability analysis, critically examines its limitations, and suggests ways to address them.
Abstract: With the growing size and complexity of software applications, research in the area of architecture-based software reliability analysis has gained prominence. The purpose of this paper is to provide an overview of the existing research in this area, critically examine its limitations, and suggest ways to address the identified limitations.
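
For readers unfamiliar with the area, here is a minimal sketch of one classic architecture-based model (a Cheung-style discrete-time Markov chain, with illustrative numbers rather than anything from the paper): system reliability is derived from per-component reliabilities and control-transfer probabilities.

```python
import numpy as np

R = np.diag([0.99, 0.97, 0.95])        # per-component reliabilities
P = np.array([[0.0, 0.6, 0.4],         # P[i, j]: control transfers i -> j
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])        # component 2 is the exit

# Q[i, j] = R_i * P[i, j]; summing over all failure-free paths gives
# S = (I - Q)^-1, and system reliability = S[0, exit] * R_exit.
S = np.linalg.inv(np.eye(3) - R @ P)
print(f"estimated system reliability: {S[0, 2] * R[2, 2]:.4f}")
```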

190 citations


Journal ArticleDOI
TL;DR: In the study, a similarity-degree-based algorithm is proposed to aggregate objective information about ERP systems from external professional organizations, which may be expressed in different linguistic term sets.
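
The TL;DR gives no formulas, so the following is only a rough sketch of the general idea, with invented term sets and a simple similarity weighting: assessments expressed in different linguistic term sets are mapped onto a common numeric scale before aggregation.

```python
# Toy aggregation of linguistic assessments from different term sets.
def to_unit(term, term_set):
    return term_set.index(term) / (len(term_set) - 1)

five  = ["very poor", "poor", "fair", "good", "very good"]
seven = ["none", "very low", "low", "medium", "high", "very high", "perfect"]

ratings = [("good", five), ("high", seven), ("fair", five)]
values = [to_unit(t, s) for t, s in ratings]

# similarity degree of each rating to the others: closer ratings weigh more
weights = [sum(1 - abs(v - u) for u in values) for v in values]
score = sum(w * v for w, v in zip(weights, values)) / sum(weights)
print(f"aggregated ERP score: {score:.3f}")
```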

186 citations


Proceedings ArticleDOI
24 Jul 2007
TL;DR: A research project to develop a 2D-barcode processing solution to support mobile applications is reported, and application examples and a case study using the solution are presented.
Abstract: With the swift increase in the number of mobile device users, more wireless information services and mobile commerce applications are needed. Since various barcodes have been used for decades as a very effective means in many traditional commerce systems, people are now looking for innovative solutions to use barcodes in the wireless world. Recently, the mobile industry began to pay more attention to barcode applications in m-commerce because 2D-barcodes not only provide a simple and inexpensive method to present diverse commerce data, but also improve the mobile user experience by reducing their inputs. This paper first discusses 2D-barcode concepts, types and classifications, major technology players, and applications in mobile commerce. Then, it reports on a research project to develop a 2D-barcode processing solution to support mobile applications. Finally, the paper presents application examples and a case study using the solution.

186 citations


Journal ArticleDOI
TL;DR: A system is proposed in which a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, handles remote visualization sessions based on MPEG video streaming involving complex 3D models.
Abstract: Mobile devices such as personal digital assistants, tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a task hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, personal digital assistants (PDAs), and tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more, depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients, computing a video stream for each one; the resolution and quality of each stream are tailored according to the screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency, bit rate and quality of the generated stream, screen resolutions, and frames per second displayed.
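
A minimal sketch of the per-client tailoring described above, with an invented resolution ladder and thresholds (the paper does not publish its exact policy): pick a resolution bounded by the client's screen, and a quality level that fits its bandwidth at the target frame rate.

```python
# Toy per-client stream tailoring; all numbers are illustrative.
def tailor_stream(screen_w, screen_h, bandwidth_kbps, fps=30):
    # never encode more pixels than the client can display
    ladder = [(1024, 768), (640, 480), (320, 240), (176, 144)]
    for w, h in ladder:
        if w <= screen_w and h <= screen_h:
            break
    # rough bits-per-pixel budget at the requested frame rate
    bpp = (bandwidth_kbps * 1000) / (w * h * fps)
    quality = "high" if bpp > 0.15 else "medium" if bpp > 0.07 else "low"
    return (w, h), quality

print(tailor_stream(320, 240, 384))   # e.g. a PDA on a 3G link
```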

176 citations


Proceedings ArticleDOI
24 Jul 2007
TL;DR: Salome is a platform whose goal is to carry out studies while providing a way to integrate different codes into the same numerical framework, and the programming model offered by the platform to integrate numerical codes is described.
Abstract: Numerical simulation is more and more widely used in the design of new products, owing to the rising cost of physical test facilities and to improvements in technology. For example, it makes it possible to determine how a piece of equipment (for example, a nuclear reactor) evolves during its lifespan. To be able to achieve realistic results, numerical simulation codes need to rely on new computer-related technologies (in both software and hardware areas) to face their important challenges. Salome is a platform whose goal is to carry out these studies while providing a way to integrate different codes into the same numerical framework. This paper describes the programming model offered by the platform to integrate numerical codes.

174 citations


Proceedings ArticleDOI
24 May 2007
TL;DR: Nine lessons learned from five representative projects are presented, along with their software engineering implications, to provide insight into the software development environments in this domain.
Abstract: The need for high performance computing applications for computational science and engineering projects is growing rapidly, yet there have been few detailed studies of the software engineering process used for these applications. The DARPA High Productivity Computing Systems Program has sponsored a series of case studies of representative computational science and engineering projects to identify the steps involved in developing such applications (i.e. the life cycle, the workflows, technical challenges, and organizational challenges). Secondary goals were to characterize tool usage and identify enhancements that would increase the programmers' productivity. Finally, these studies were designed to develop a set of lessons learned that can be transferred to the general computational science and engineering community to improve the software engineering process used for their applications. Nine lessons learned from five representative projects are presented, along with their software engineering implications, to provide insight into the software development environments in this domain.

161 citations


Journal ArticleDOI
TL;DR: An approach to the API-evolution problem in the context of reuse-based software development is discussed that automatically recognizes API changes of the reused framework and proposes plausible replacements for the "obsolete" API based on working examples of the framework code base.
Abstract: Applications built on reusable component frameworks are subject to two independent, and potentially conflicting, evolution processes. The application evolves in response to the specific requirements and desired qualities of the application's stakeholders. On the other hand, the evolution of the component framework is driven by the need to improve the framework functionality and quality while maintaining its generality. Thus, changes to the component framework frequently change its API on which its client applications rely and, as a result, these applications break. To date, there has been some work aimed at supporting the migration of client applications to newer versions of their underlying frameworks, but it usually requires that the framework developers do additional work for that purpose or that the application developers use the same tools as the framework developers. In this paper, we discuss our approach to tackle the API-evolution problem in the context of reuse-based software development, which automatically recognizes the API changes of the reused framework and proposes plausible replacements to the "obsolete" API based on working examples of the framework code base. This approach has been implemented in the Diff-CatchUp tool. We report on two case studies that we have conducted to evaluate the effectiveness of our approach with its Diff-CatchUp prototype.
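
One ingredient such a tool needs is ranking plausible replacements for an obsolete API. The sketch below shows only a lexical-similarity baseline with invented method names; Diff-CatchUp itself goes further and mines working examples from the framework's code base.

```python
# Rank candidate replacement APIs by name similarity (toy baseline).
from difflib import SequenceMatcher

def rank_replacements(obsolete, candidates):
    score = lambda c: SequenceMatcher(None, obsolete.lower(), c.lower()).ratio()
    return sorted(candidates, key=score, reverse=True)

old_api = "Widget.getLabelText"                       # hypothetical names
new_apis = ["Widget.getText", "Widget.getToolTip", "Label.getText"]
print(rank_replacements(old_api, new_apis))
```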

158 citations


Patent
Gordon J. Freedman1
07 Jan 2007
TL;DR: In this article, a first software component is configured to cause retrieval and storage of structured data for a first data class from a first store of the structured data on a device, such as a first Data Processing System (DPS).
Abstract: Synchronization architectures, methods, systems, and computer readable media are described. One exemplary embodiment includes a first software component which is configured to cause retrieval and storage of structured data for a first data class from a first store of the structured data on a device, such as a first data processing system, and is configured to synchronize structured data for the first data class in the first store with structured data of the first data class in a second store on a host, such as a second data processing system. The first software component is separate from an application software which provides a user interface to allow a user to access and edit the structured data. The first software component synchronizes the structured data through a second software component which interfaces with the host and the device and which controls an order of synchronizing and a plurality of data classes including the first data class.
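
A minimal sketch of the described split, with invented class names (the patent text stays at the architecture level): a session component owns the ordering over data classes, and a per-class component moves structured data between the device and host stores, separate from the UI application.

```python
# Toy rendering of the two-component synchronization architecture.
class ContactsSync:                      # "first software component"
    data_class = "contacts"
    def sync(self, device_store, host_store):
        merged = {**device_store, **host_store}      # trivial merge rule
        device_store.update(merged)
        host_store.update(merged)

class SyncSession:                       # "second software component"
    def __init__(self, components):
        self.components = components     # the order of data classes lives here
    def run(self, device, host):
        for comp in self.components:
            comp.sync(device[comp.data_class], host[comp.data_class])

device = {"contacts": {"alice": "555-0100"}}
host   = {"contacts": {"bob": "555-0199"}}
SyncSession([ContactsSync()]).run(device, host)
print(device, host)
```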

156 citations


Proceedings ArticleDOI
24 Jul 2007
TL;DR: The construction and design of a static analysis framework (called SAFELI) for identifying SIA vulnerabilities at compile time is proposed, which has the future potential to discover more delicate SQL injection attacks than black-box Web security inspection tools.
Abstract: Recently SQL injection attack (SIA) has become a major threat to Web applications. Via carefully crafted user input, attackers can expose or manipulate the back-end database of a Web application. This paper proposes the construction and outlines the design of a static analysis framework (called SAFELI) for identifying SIA vulnerabilities at compile time. SAFELI statically inspects MSIL bytecode of an ASP.NET Web application, using symbolic execution. At each hotspot that submits a SQL query, a hybrid constraint solver is used to find the corresponding user input that could lead to a breach of information security. Once completed, SAFELI has the future potential to discover more delicate SQL injection attacks than black-box Web security inspection tools.
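
The underlying idea of a hotspot check can be sketched without symbolic execution. The toy scan below flags SQL queries assembled by string concatenation with a variable; SAFELI itself works on MSIL bytecode and hands each hotspot to a hybrid constraint solver.

```python
# Toy static hotspot finder over Python source (illustrative only).
import ast

SOURCE = '''
q = "SELECT * FROM users WHERE name = '" + request_param + "'"
'''

class HotspotFinder(ast.NodeVisitor):
    def visit_BinOp(self, node):
        # a string literal containing SQL, concatenated with something dynamic
        if isinstance(node.op, ast.Add) and isinstance(node.left, ast.Constant) \
           and isinstance(node.left.value, str) and "SELECT" in node.left.value.upper():
            print("potential SQL injection hotspot at line", node.lineno)
        self.generic_visit(node)

HotspotFinder().visit(ast.parse(SOURCE))
```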

Proceedings ArticleDOI
23 May 2007
TL;DR: Future developments in Web applications will be driven by advances in browser technology, Web and Internet infrastructure, protocol standards, software engineering methods, and application trends.
Abstract: A Web application is an application that is invoked with a Web browser over the Internet. Ever since 1994 when the Internet became available to the public and especially in 1995 when the World Wide Web put a usable face on the Internet, the Internet has become a platform of choice for a large number of ever-more sophisticated and innovative Web applications. In just one decade, the Web has evolved from being a repository of pages used primarily for accessing static, mostly scientific, information to a powerful platform for application development and deployment. New Web technologies, languages, and methodologies make it possible to create dynamic applications that represent a new model of cooperation and collaboration among large numbers of users. Web application development has been quick to adopt software engineering techniques of component orientation and standard components. For example, search, syndication, and tagging have become standard components of a new generation of collaborative applications and processes. Future developments in Web applications will be driven by advances in browser technology, Web and Internet infrastructure, protocol standards, software engineering methods, and application trends.

Proceedings ArticleDOI
23 May 2007
TL;DR: The view on next generation middleware is introduced, considering both technological advances in the networking area but also the need for closer integration with software engineering best practices, to ultimately suggest middleware-based software processes.
Abstract: Middleware is a software layer that stands between the networked operating system and the application and provides well-known reusable solutions to frequently encountered problems like heterogeneity, interoperability, security, and dependability. Further, with networks becoming increasingly pervasive, middleware appears as a major building block for the development of future software systems. Starting with the impact of pervasive networking on computing models, manifested by now-common grid and ubiquitous computing, this paper surveys the related challenges for middleware and their impact on software development. Indeed, future applications will need to cope with advanced non-functional properties such as context-awareness and mobility, for which adequate middleware support must be devised together with accompanying software development notations, methods and tools. This leads us to introduce our view on next-generation middleware, considering both technological advances in the networking area and the need for closer integration with software engineering best practices, to ultimately suggest middleware-based software processes.

Proceedings ArticleDOI
Yuan Chen1, Subu Iyer1, Xue Liu1, Dejan Milojicic1, Akhil Sahai1 
11 Jun 2007
TL;DR: This paper presents an approach that combines performance modeling with performance profiling to create models that translate SLOs to lower-level resource requirements for each system involved in providing the service, eliminating the involvement of domain experts.
Abstract: In today's complex and highly dynamic computing environments, systems/services have to be constantly adjusted to meet service level agreements (SLAs) and to improve resource utilization, thus reducing operating cost. Traditional design of such systems usually involves domain experts who implicitly translate service level objectives (SLOs) specified in SLAs to system-level thresholds in an ad-hoc manner. In this paper, we present an approach that combines performance modeling with performance profiling to create models that translate SLOs to lower-level resource requirements for each system involved in providing the service. Using these models, the process of creating an efficient design of a system/service can be automated, eliminating the involvement of domain experts. We demonstrate that our approach is practical and that it can be applied to different applications and software architectures. Our experiments show that for a typical 3-tier e-commerce application in a virtualized environment the SLAs can be met while improving CPU utilization up to 3 times.
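
A minimal sketch of the core move, with fabricated profiling numbers: fit a model from offline profiling runs, then invert it to turn a response-time SLO into a per-tier resource requirement.

```python
import numpy as np

# profiling data: CPU share given to a tier vs. mean response time (ms);
# the numbers below are invented for illustration
cpu_share = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
resp_ms   = np.array([900, 420, 260, 190, 150])

# response time is roughly linear in 1/cpu for this toy data
a, b = np.polyfit(1.0 / cpu_share, resp_ms, 1)

def cpu_needed(slo_ms):
    """Smallest CPU share whose predicted response time meets the SLO."""
    return a / (slo_ms - b)

print(f"CPU share for a 300 ms SLO: {cpu_needed(300):.2f}")
```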

Proceedings ArticleDOI
25 Apr 2007
TL;DR: The design principles, implementation, and evaluation of the RETOS operating system which is specifically developed for micro sensor nodes, and the networking architecture in RETOS is designed with a layering concept to provide WSN-specific network abstraction.
Abstract: This paper presents the design principles, implementation, and evaluation of the RETOS operating system, which is specifically developed for micro sensor nodes. RETOS has four distinct objectives, which are to provide (1) a multithreaded programming interface, (2) system resiliency, (3) kernel extensibility with dynamic reconfiguration, and (4) WSN-oriented network abstraction. RETOS is a multithreaded operating system, hence it provides the commonly used thread model of programming interface to developers. We have used various implementation techniques to optimize the performance and resource usage of multithreading. RETOS also provides software solutions to separate the kernel from user applications, and supports their robust execution on MMU-less hardware. The RETOS kernel can be dynamically reconfigured, via a loadable kernel framework, so an application-optimized and resource-efficient kernel is constructed. Finally, the networking architecture in RETOS is designed with a layering concept to provide WSN-specific network abstraction. RETOS currently supports the Atmel ATmega128, TI MSP430, and Chipcon CC2430 families of microcontrollers. Several real-world WSN applications have been developed for RETOS, and the overall evaluation of the systems is described in the paper.
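
The value of the thread model is easiest to see in code. The sketch below uses plain Python threads (not the RETOS C API, which the abstract does not spell out): each task is written in a blocking style instead of as event callbacks.

```python
import threading, queue, time

samples = queue.Queue()

def sense():                       # task 1: periodic sampling
    for t in range(3):
        samples.put(("reading", t))
        time.sleep(0.01)
    samples.put(None)              # end-of-stream marker

def transmit():                    # task 2: blocks until data is ready
    while (s := samples.get()) is not None:
        print("send", s)

threads = [threading.Thread(target=sense), threading.Thread(target=transmit)]
for t in threads: t.start()
for t in threads: t.join()
```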

Proceedings ArticleDOI
25 Apr 2007
TL;DR: Worldsens is presented, an integrated environment for development and rapid prototyping of wireless sensor network applications that relies on software simulation to help the designer during the whole development process.
Abstract: In this paper we present Worldsens, an integrated environment for development and rapid prototyping of wireless sensor network applications. Our environment relies on software simulation to help the designer during the whole development process. The refinement is done starting from the high-level design choices down to the target code implementation, debug and performance analysis. In the early stages of the design, high-level parameters, such as the node sleep and activity periods, can be tuned using WSNet, an event-driven wireless network simulator. WSNet uses models for applications, protocols and radio medium communication with a parameterized accuracy. The second step of the sensor network application design takes place after the hardware implementation choices. This second step relies on WSim, a cycle-accurate hardware platform simulator. WSim is used to debug the application using the real target binary code. Precise performance evaluation, including real-time analysis at the interrupt level, is made possible at this low simulation level. WSim can be connected to WSNet, in place of the application and protocol models used during the high-level simulation, to achieve a full distributed application simulation. WSNet and WSNet+WSim allow a continuous refinement from high-level estimations down to low-level real-time validation. We illustrate the complete application design process using a real-life demonstrator that implements a hello protocol for dynamic neighborhood discovery in a wireless sensor network environment.
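
A minimal sketch, with invented power numbers, of the kind of high-level exploration WSNet supports before any target code exists: estimating the energy/latency trade-off of different sleep/activity duty cycles.

```python
# Toy duty-cycle exploration; power figures are illustrative.
def duty_cycle_tradeoff(active_ms, sleep_ms,
                        p_active_mw=60.0, p_sleep_mw=0.09):
    period = active_ms + sleep_ms
    avg_power = (active_ms * p_active_mw + sleep_ms * p_sleep_mw) / period
    worst_latency = sleep_ms          # a neighbour must wait out our sleep
    return avg_power, worst_latency

for sleep in (100, 500, 2000):
    p, l = duty_cycle_tradeoff(active_ms=50, sleep_ms=sleep)
    print(f"sleep {sleep:4} ms -> avg {p:6.2f} mW, worst latency {l} ms")
```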

Journal ArticleDOI
TL;DR: The authors present a qualitative and quantitative analysis of state-of-the art replication and caching techniques used to host Web applications and propose a technique for Web practitioners to compare different mechanisms' performance on their own.
Abstract: Developers often use replication and caching mechanisms to enhance Web application performance. The authors present a qualitative and quantitative analysis of state-of-the-art replication and caching techniques used to host Web applications. Their analysis shows that selecting the best mechanism depends heavily on the data workload and requires a careful review of the application's characteristics. They also propose a technique for Web practitioners to compare different mechanisms' performance on their own.
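
The workload dependence can be sketched with a back-of-the-envelope cost model (invented latencies; the paper's analysis is far more careful): a cache pays on writes through invalidation, replication pays on update propagation, so the best choice flips with the read/write mix.

```python
# Toy latency model comparing caching vs. full replication.
def mean_latency(read_frac, hit_ratio=0.8, n_replicas=10,
                 t_local=5, t_origin=80, t_sync=40):
    w = 1 - read_frac
    caching = (read_frac * (hit_ratio * t_local + (1 - hit_ratio) * t_origin)
               + w * (t_origin + t_sync))         # write-through + invalidate
    replication = read_frac * t_local + w * n_replicas * t_sync
    return caching, replication

for rf in (0.99, 0.9, 0.6):
    c, r = mean_latency(rf)
    print(f"reads {rf:.0%}: caching {c:6.1f} ms vs replication {r:6.1f} ms")
```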

Proceedings ArticleDOI
09 Jul 2007
TL;DR: The author proposes an analytical model to study the competition between SaaS and traditional COTS (commercial off-the-shelf) solutions for software applications and shows that when software applications become open, modular, and standardized, the SaaS business model will take a significant market share.
Abstract: The emergence of the software-as-a-service (SaaS) business model has attracted great attention from both researchers and practitioners. SaaS vendors deliver on-demand information processing services to users, and thus offer computing utility rather than the standalone software itself. In this work, the author proposes an analytical model to study the competition between SaaS and traditional COTS (commercial off-the-shelf) solutions for software applications. The author shows that when software applications become open, modular, and standardized, the SaaS business model will take a significant market share. In addition, under certain market conditions, offering users an easy exit option through the software contract will help to increase the SaaS vendors' competitive ability.
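
A minimal sketch, with invented parameters rather than the author's actual model, of the basic economics behind the result: SaaS trades a large up-front license for a recurring fee, and its appeal grows as the integration cost, high for closed and monolithic software, shrinks with standardization.

```python
# Toy cost comparison; all parameters are illustrative.
def total_cost(months, saas_fee=100, license_cost=2000, integration=1500,
               standardization=1.0):
    """standardization in [0, 1]: 1 = fully open, modular, standardized."""
    saas = months * saas_fee + (1 - standardization) * integration
    cots = license_cost + 0.5 * integration
    return saas, cots

for s in (0.2, 0.9):
    print(f"standardization {s}: (SaaS, COTS) over 24 months ->",
          total_cost(24, standardization=s))
```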

Patent
24 Oct 2007
TL;DR: In this paper, a network for updating firmware, drivers, or application software facilitates the access to generated update packages by electronic devices and the update of firmware, drivers, content, or application software in a fault-tolerant mode.
Abstract: A network for updating firmware, drivers, or application software facilitates the access to generated update packages by electronic devices and the update of firmware, drivers, content or application software in a fault-tolerant mode. A “Bubbles” technique is employed to generate efficient and compact update packages. “Bubbles” information is generated employing the “Bubbles” technique and is subsequently sent to the electronic devices as part of an update package. The “Bubbles” information and other related information are used in preprocessing activities and in other update-related activities. For example, they are used to prepare the electronic device for an update to a different version of its firmware, software and/or content.
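
The patented “Bubbles” encoding is not described in enough detail to reproduce, but the general idea of compact update packages can be sketched: ship a diff of the firmware image rather than the full image.

```python
import difflib

# toy "firmware image" of 200 lines with a single changed block
old_fw = [f"block {i}: stable code\n" for i in range(200)]
new_fw = list(old_fw)
new_fw[17] = "block 17: patched radio driver\n"   # the only change

package = list(difflib.unified_diff(old_fw, new_fw, lineterm="\n"))
print(len("".join(package)), "byte package instead of a",
      len("".join(new_fw)), "byte full image")
# a fault-tolerant updater would write the patched image to a spare bank
# and switch over only after verifying a checksum
```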

Proceedings ArticleDOI
23 Apr 2007
TL;DR: A new platform for reconfigurable computing has an object-based programming model, with architecture, silicon and tools designed to faithfully realize this model, aimed at application developers using software languages and methodologies.
Abstract: A new platform for reconfigurable computing has an object-based programming model, with architecture, silicon and tools designed to faithfully realize this model. The platform is aimed at application developers using software languages and methodologies. Its objectives are massive performance, long-term scalability, and easy development. In our structural object programming model, objects are strictly encapsulated software programs running concurrently on an asynchronous array of processors and memories. They exchange data and control through a structure of self-synchronizing asynchronous channels. Objects are combined hierarchically to create new objects, connected through the common channel interface. The first chip is a 130nm ASIC with 360 32-bit processors, 360 1KB RAM banks with access engines, and a configurable word-wide channel interconnect. Applications written in Java and block diagrams compile in one minute. Sub-millisecond runtime reconfiguration is inherent.
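
A minimal sketch of the structural object model, with plain Python threads and queues standing in for the chip's processors and hardware channels: strictly encapsulated objects that communicate only through blocking, self-synchronizing channels.

```python
import threading, queue

def producer(out_ch):
    for i in range(5):
        out_ch.put(i)             # blocks if the channel is full
    out_ch.put(None)              # end-of-stream marker

def square(in_ch, out_ch):
    while (v := in_ch.get()) is not None:
        out_ch.put(v * v)
    out_ch.put(None)

def consumer(in_ch):
    while (v := in_ch.get()) is not None:
        print("result", v)

a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)   # self-synchronizing
for f, args in [(producer, (a,)), (square, (a, b)), (consumer, (b,))]:
    threading.Thread(target=f, args=args).start()
```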

Proceedings ArticleDOI
17 Jun 2007
TL;DR: In this paper, a hierarchical motion history histogram (HMHH) feature is proposed to represent the motion information and a low-dimensional feature vector is extracted from motion history images to be used in SVM classifiers.
Abstract: In this paper, we propose a human action recognition system suitable for embedded computer vision applications in security systems, human-computer interaction and intelligent environments. Our system is suitable for embedded computer vision applications for three reasons. Firstly, the system is based on a linear support vector machine (SVM) classifier, whose classification stage can be implemented easily and quickly in embedded hardware. Secondly, we use compact motion features easily obtained from videos. We address the limitations of the well-known motion history image (MHI) and propose a new hierarchical motion history histogram (HMHH) feature to represent the motion information. HMHH not only provides rich motion information, but also remains computationally inexpensive. Finally, we combine MHI and HMHH together and extract a low-dimensional feature vector to be used in the SVM classifiers. Experimental results show that our system achieves significant improvement on the recognition performance.
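
The first stage of the pipeline, a motion history image, is simple enough to sketch (random stand-in frames here; a real system uses video, and the resulting histogram would feed a linear SVM such as scikit-learn's LinearSVC).

```python
import numpy as np

def update_mhi(mhi, prev, frame, tau=255, delta=32, decay=16):
    motion = np.abs(frame.astype(int) - prev.astype(int)) > delta
    mhi = np.maximum(mhi - decay, 0)   # older motion fades out
    mhi[motion] = tau                  # fresh motion is brightest
    return mhi

h, w = 64, 64
mhi = np.zeros((h, w), dtype=int)
prev = np.random.randint(0, 256, (h, w))
for _ in range(10):                    # stand-in for consecutive video frames
    frame = np.random.randint(0, 256, (h, w))
    mhi = update_mhi(mhi, prev, frame)
    prev = frame

feature, _ = np.histogram(mhi, bins=16, range=(0, 256))  # low-dimensional
print(feature)
```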

Proceedings ArticleDOI
23 May 2007
TL;DR: Directions for design research are outlined, including: (a) drawing lessons, inspiration, and techniques from design fields outside of computer science, (b) emphasizing the design of application "character" (functionality and style) as well as the application's structure, and (c) expanding the notion of software to encompass the designof additional kinds of intangible complex artifacts.
Abstract: The design of software has been a focus of software engineering research since the field's beginning. This paper explores key aspects of this research focus and shows why design will remain a principal focus. The intrinsic elements of software design, both process and product, are discussed: concept formation, use of experience, and means for representation, reasoning, and directing the design activity. Design is presented as being an activity engaged by a wide range of stakeholders, acting throughout most of a system's lifecycle, making a set of key choices which constitute the application's architecture. Directions for design research are outlined, including: (a) drawing lessons, inspiration, and techniques from design fields outside of computer science, (b) emphasizing the design of application "character" (functionality and style) as well as the application's structure, and (c) expanding the notion of software to encompass the design of additional kinds of intangible complex artifacts.

Proceedings ArticleDOI
31 Oct 2007
TL;DR: This paper uses the Abstract Data Views (ADV) design model, which makes it possible to express the structure and behaviors of the user interface at a high level of abstraction and to create complex interfaces as oblivious compositions of simple interface atoms.
Abstract: In this paper we present a novel approach for designing the interface of rich internet applications. Our approach uses the Abstract Data Views (ADV) design model, which makes it possible to express the structure and behaviors of the user interface at a high level of abstraction. Additionally, by using advanced techniques for separation of concerns, it allows complex interfaces to be created as oblivious compositions of simple interface atoms. Using a simple illustrative example, we present the rationale of our approach, its core stages and how it is integrated into the Object-Oriented Hypermedia Design Method (OOHDM). Some implementation issues are finally analyzed.
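
The composition idea can be sketched with plain objects (not the ADV notation itself): simple interface atoms combined obliviously, i.e. without the atoms knowing about each other.

```python
# Toy composition of interface atoms; names are invented.
class Atom:
    def __init__(self, name): self.name = name
    def render(self, indent=0): print(" " * indent + f"<{self.name}/>")

class Composite(Atom):
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children          # atoms stay unaware of each other
    def render(self, indent=0):
        print(" " * indent + f"<{self.name}>")
        for c in self.children:
            c.render(indent + 2)
        print(" " * indent + f"</{self.name}>")

search = Composite("searchBox", [Atom("textField"), Atom("suggestList")])
page = Composite("productPage", [search, Atom("gallery")])
page.render()
```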

Proceedings ArticleDOI
03 Jan 2007
TL;DR: This paper identifies a taxonomy of software security assurance tools and defines one type of tool: Web application scanner, i.e., an automated program that examines Web applications for security vulnerabilities.
Abstract: There are many commercial software security assurance tools that claim to detect and prevent vulnerabilities in application software. However, a closer look at the tools often leaves one wondering which tools find what vulnerabilities. This paper identifies a taxonomy of software security assurance tools and defines one type of tool: Web application scanner, i.e., an automated program that examines Web applications for security vulnerabilities. We describe the types of functions that are generally found in a Web application scanner and how to test it.
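
One function every Web application scanner provides is probe injection. A minimal sketch, against a hypothetical target URL (real scanners add crawling, session handling and many attack classes): inject a marker and check whether it is reflected unescaped.

```python
import urllib.parse, urllib.request

PROBE = "<script>alert(1)</script>"

def reflected_xss(url, param):
    """Return True if the probe comes back unescaped in the response body."""
    full = f"{url}?{urllib.parse.urlencode({param: PROBE})}"
    with urllib.request.urlopen(full) as resp:
        return PROBE in resp.read().decode(errors="replace")

# usage against a deliberately vulnerable local test app (hypothetical):
# print(reflected_xss("http://localhost:8080/search", "q"))
```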

Journal ArticleDOI
TL;DR: This paper first presents a brief review of traditional fault diagnosis methods, with an emphasis on their application to electric motors as important components of the all-electric ship, and then introduces software agent technology.
Abstract: Fault diagnosis and prognosis are important tools for the reliability, availability, and survivability of navy all-electric ships (AES). Extending the fault detection and diagnosis into predictive maintenance increases the value of this technology. The traditional diagnosis can be viewed as a single diagnostic agent having a model of the component or the whole system to be diagnosed. This becomes inadequate when the components or system become large, complex, and even distributed as on navy electric ships. For such systems, the software multiagents may offer a solution. A key benefit of software agents is their ability to automatically perform complex tasks in place of human operators. After briefly reviewing traditional fault diagnosis and software agent technologies, this paper discusses how these technologies can be used to support the drastic manning reduction requirements for future navy ships. Examples are given on the existing naval applications and research on detection, diagnostic, and prognostic software agents. Current work on a multiagent system for shipboard power systems is presented as an example of system-level application.

Journal ArticleDOI
TL;DR: This work sets up and models the phospholipid network in the phagosome and genome-scale metabolic maps of S. aureus, S. epidermidis and S. saprophyticus, and tests their robustness against enzyme impairment.
Abstract: Modeling of metabolic networks includes tasks such as network assembly, network overview, calculation of metabolic fluxes and testing the robustness of the network. YANAsquare provides a software framework for rapid network assembly (flexible pathway browser with local or remote operation mode), network overview (visualization routine and YANAsquare editor) and network performance analysis (calculation of flux modes as well as target and robustness tests). YANAsquare comes as an easy-to-setup program package in Java. It is fully compatible with, and integrates, the programs YANA (translation of gene expression values into flux distributions, metabolite network dissection) and Metatool (elementary mode calculation). As application examples, we set up and model the phospholipid network in the phagosome and genome-scale metabolic maps of S. aureus, S. epidermidis and S. saprophyticus, as well as test their robustness against enzyme impairment. YANAsquare is application software for rapid setup, visualization and analysis of small, larger and genome-scale metabolic networks.
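
The core computation behind such tools is easy to sketch with a toy three-reaction network (nothing from the paper's genome-scale maps): steady-state flux vectors v satisfy S v = 0 for the stoichiometric matrix S.

```python
import numpy as np
from scipy.linalg import null_space

# rows = internal metabolites A, B; columns = reactions: ->A, A->B, B->
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])

for v in null_space(S).T:              # basis of {v : S v = 0}
    v = v / np.abs(v).max()            # scale for readability
    print("steady-state flux mode:", np.round(v, 3))
```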

Proceedings ArticleDOI
10 Nov 2007
TL;DR: A technology to observe kernel actions and make this information available to application-level performance measurement tools is described and the benefits of merged application and OS performance information and its use in parallel performance analysis are demonstrated.
Abstract: The performance of a parallel application on a scalable HPC system is determined by user-level execution of the application code and system-level (OS kernel) operations. To understand the influences of system-level factors on application performance, the measurement of OS kernel activities is key. We describe a technology to observe kernel actions and make this information available to application-level performance measurement tools. The benefits of merged application and OS performance information and its use in parallel performance analysis are demonstrated, both for profiling and tracing methodologies. In particular, we focus on the problem of kernel noise assessment as a stress test of the approach. We show new results for characterizing noise and introduce new techniques for evaluating noise interference and its effects on application execution. Our kernel measurement and noise analysis technologies are being developed as part of Linux OS environments for scalable parallel systems.
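
A standard user-level probe for OS noise, far coarser than the kernel instrumentation described in the paper, is the fixed-work quantum benchmark: time many identical work units and treat outliers as interference.

```python
import time, statistics

def work():                       # a fixed unit of pure user-level work
    s = 0
    for i in range(20000):
        s += i * i
    return s

durations = []
for _ in range(500):
    t0 = time.perf_counter()
    work()
    durations.append(time.perf_counter() - t0)

med = statistics.median(durations)
noise_events = [d for d in durations if d > 1.5 * med]
print(f"median {med*1e6:.0f} us, {len(noise_events)} outliers "
      f"({100*len(noise_events)/len(durations):.1f}% of quanta perturbed)")
```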

Proceedings ArticleDOI
15 Oct 2007
TL;DR: A graph based approach for modelling the effects of both attacks against computer networks and response measures as reactions against the attacks, designed for a scalable granularity in representing properties of the network and its components to be protected.
Abstract: This contribution presents a graph-based approach for modelling the effects of both attacks against computer networks and response measures as reactions against the attacks. Certain properties of the model graphs are utilized to quantify different response metrics which are well-known from the pragmatic view of network security officers. Using these metrics, it is possible to (1) quantify practically relevant properties of a response measure after its application, and (2) estimate these properties for all available response measures prior to their application. The latter case is the basis for the selection of an appropriate reaction to a given attack. Our graph-based model is similar to those used in software reliability analysis and was designed for a scalable granularity in representing properties of the network and its components to be protected. Different examples show the applicability of the model and the resulting metric values.
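
The flavor of the approach can be sketched with a toy network and an invented metric (using networkx): score a response measure by how much attacker reachability it removes versus how much legitimate connectivity it keeps.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("internet", "dmz"), ("dmz", "web"), ("web", "db"),
                  ("lan", "web"), ("lan", "db")])

def exposure(graph):              # assets reachable from the attacker
    return len(nx.descendants(graph, "internet"))

blocked = g.copy()
blocked.remove_edge("web", "db")  # candidate response: a firewall rule
print("exposure before/after:", exposure(g), exposure(blocked))
print("legitimate lan->db still possible:", nx.has_path(blocked, "lan", "db"))
```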

Proceedings ArticleDOI
Simon Moser1, Axel Martens1, Katharina Görlach1, Wolfram Amme2, A. Godlinski2 
09 Jul 2007
TL;DR: This paper presents a method to extract dataflow information by constructing a CSSA representation and detecting data dependencies that affect communication behavior, which are used to construct a more precise formal model of the given BPEL process and hence to improve the quality of analysis results.
Abstract: The Business Process Execution Language for Web Services, WS-BPEL, provides a technology to aggregate encapsulated functionalities for defining high-value Web services. For a distributed application in a B2B interaction, the partners simply need to expose their provided functionality as BPEL processes and compose them. Verifying such distributed Web-service-based systems has been a huge topic in the research community lately; cf. [4] for a good overview. However, in most of the work on analyzing properties of interacting Web services, especially when backed by stateful implementations like WS-BPEL, the data flow present in the implementation is widely neglected, and the analysis focuses on control flow only. This might lead to false-positive analysis results when searching for design weaknesses and errors, e.g. when analyzing the controllability [14] of a given BPEL process. In this paper, we present a method to extract dataflow information by constructing a CSSA representation and detecting data dependencies that affect communication behavior. These discovered dependencies are used to construct a more precise formal model of the given BPEL process and hence to improve the quality of analysis results.
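
A toy illustration of the kind of dependency being hunted, in a representation far simpler than CSSA over WS-BPEL: a value received from a partner flows into a condition that decides which message is sent next, so data flow changes communication behavior.

```python
# Toy process: receive defines 'order', an assign derives 'amount' from it,
# and a branch on 'amount' selects which reply is sent.
process = [
    ("receive", "order"),                     # message in: defines 'order'
    ("assign",  "amount", "order"),           # amount := f(order)
    ("if", "amount", "reply_ok", "reply_reject"),
]

defined_by = {s[1]: s for s in process if s[0] in ("receive", "assign")}

def origin(var):                              # walk the def-use chain back
    step = defined_by[var]
    return origin(step[2]) if step[0] == "assign" else step

for s in process:
    if s[0] == "if" and origin(s[1])[0] == "receive":
        print(f"branch on '{s[1]}' depends on received '{origin(s[1])[1]}' "
              f"and chooses between {s[2]} and {s[3]}")
```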

Proceedings ArticleDOI
07 May 2007
TL;DR: The architectural additions and the many trade-offs in the design of a run-time library for NoC reconfiguration are shown and the performance, memory requirements, predictability and reusability of the different implementations are evaluated.
Abstract: Systems on chip (SoC) are becoming increasingly complex, with a large number of applications integrated on the same chip. Such a system often supports a large number of use-cases and is dynamically reconfigured when platform conditions or user requirements change. Networks on chip (NoC) offer the designer unsurpassed runtime flexibility. This flexibility stems from the programmability of the individual routers and network interfaces. When a change in use-case occurs, the application task graph and the network connections change. To mitigate the complexity in programming the many registers controlling the NoC, an abstraction in the form of a configuration library is needed. In addition, such a library must leave the modified system in a consistent state, from which normal operation can continue. In this paper we present the facilities for controlling change in a reconfigurable NoC. We show the architectural additions and the many trade-offs in the design of a run-time library for NoC reconfiguration. We qualitatively and quantitatively evaluate the performance, memory requirements, predictability and reusability of the different implementations.
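
A minimal sketch of what such a configuration library abstracts, with an invented register map (the real library programs router and network-interface registers): tear connections down and set them up so a use-case switch leaves the NoC in a consistent state.

```python
# Toy NoC configuration library; (router, slot) pairs stand in for registers.
class NocConfig:
    def __init__(self):
        self.slots = {}                       # (router, slot) -> connection

    def open_connection(self, conn, path):
        for hop in path:                      # check before touching anything
            if hop in self.slots:
                raise RuntimeError(f"slot {hop} busy: inconsistent state")
        for hop in path:
            self.slots[hop] = conn            # program the slot tables

    def close_connection(self, conn):
        # drain before reuse so in-flight traffic is never corrupted
        self.slots = {h: c for h, c in self.slots.items() if c != conn}

noc = NocConfig()
noc.open_connection("video", [("r0", 1), ("r1", 1)])
noc.close_connection("video")                 # use-case change
noc.open_connection("audio", [("r0", 1)])     # slot safely reused
print("active slots:", noc.slots)
```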