
Showing papers in "IET Software in 2007"


Journal ArticleDOI
TL;DR: An assertion language, called assertion language for BPEL process interactions (ALBERT), is introduced; it can be used to specify both functional and non-functional properties and can be turned into checks that a software monitor performs on the composite system to verify that it continues to guarantee its required properties.
Abstract: Web services support software architectures that can evolve dynamically. In particular, in this paper the focus is on architectures where services are composed (orchestrated) through a workflow described in the business process execution language (BPEL). It is assumed that the resulting composite service refers to external services through assertions that specify their expected functional and non-functional properties. On the basis of these assertions, the composite service may be verified at design time by checking that it ensures certain relevant properties. Because of the dynamic nature of web services and the multiple stakeholders involved in their provision, however, the external services may evolve dynamically, and even unexpectedly. They may become inconsistent with respect to the assertions against which the workflow was verified during development. As a consequence, validation of the composition must extend to run time. In this work, an assertion language, called assertion language for BPEL process interactions (ALBERT), is introduced; it can be used to specify both functional and non-functional properties. An environment which supports design-time verification of ALBERT assertions for BPEL workflows via model checking is also described. At run time, the assertions can be turned into checks that a software monitor performs on the composite system to verify that it continues to guarantee its required properties. A TeleAssistance application is provided as a running example to illustrate our validation framework.

107 citations
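The run-time side of the approach above, turning an assertion into a check a monitor evaluates over observed interactions, can be given a tiny sketch. The latency bound, service name and log format are invented for illustration; ALBERT itself is a far richer temporal assertion language.

```python
# Hedged illustration of a non-functional assertion turned into a runtime
# check: every observed invocation of an external service must finish
# within a bound. All names and numbers here are hypothetical.

def check_latency(invocations, bound_ms):
    """invocations: list of (service, latency_ms). Returns violating entries."""
    return [(s, t) for s, t in invocations if t > bound_ms]

log = [("TeleAssistance.alarm", 120), ("TeleAssistance.alarm", 480)]
violations = check_latency(log, bound_ms=300)
```

A monitor raising such violations at run time is what lets the composition detect that an external service has drifted away from the assertions it was verified against at design time.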


Journal ArticleDOI
TL;DR: The concerns for arranging non-intrusive monitoring of embedded systems in a way that is suitable for use in runtime verification methods and the potential for runtime verification to utilise such monitoring approaches are considered.
Abstract: Ensuring the correctness of software applications is a difficult task. The area of runtime verification, which combines the approaches of formal verification and testing, offers a practical but limited solution that can help in finding many errors in software. Runtime verification relies upon tools for monitoring software execution. There are particular difficulties with regard to monitoring embedded systems. The concerns for arranging non-intrusive monitoring of embedded systems in a way that is suitable for use in runtime verification methods are considered here. A number of existing runtime verification tools are referenced, highlighting their requirement for monitoring solutions. Established and emerging approaches for the monitoring of software execution using execution monitors are reviewed, with an emphasis on the approaches that are best suited for use with embedded systems. A suggested solution for non-intrusive monitoring of embedded systems is presented. The conclusions summarise the possibilities for arranging non-intrusive monitoring of embedded systems, and the potential for runtime verification to utilise such monitoring approaches.

68 citations
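The kind of execution monitor the survey above discusses can be sketched minimally as a small state machine fed by an event trace. The property checked ("every open must be closed") and the observe/finish API are illustrative assumptions, not taken from any tool the paper references; a non-intrusive setup would feed such events from a hardware trace buffer rather than instrumented code.

```python
# Minimal runtime-verification monitor sketch (hypothetical API).
# It watches an event stream and records violations of a simple
# safety property: opens and closes must balance.

class Monitor:
    def __init__(self):
        self.open_count = 0
        self.violations = []

    def observe(self, event):
        if event == "open":
            self.open_count += 1
        elif event == "close":
            if self.open_count == 0:
                self.violations.append("close without open")
            else:
                self.open_count -= 1

    def finish(self):
        # Called at end of trace; leftover opens are also violations.
        if self.open_count > 0:
            self.violations.append(f"{self.open_count} unclosed open(s)")
        return self.violations

m = Monitor()
for e in ["open", "open", "close"]:
    m.observe(e)
```

The embedded-systems difficulty the paper addresses is precisely how to deliver the `observe` events without perturbing the timing of the monitored system.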


Journal ArticleDOI
TL;DR: This work presents an indicative literature survey of techniques proposed for different phases of the CBD life cycle, to help provide a better understanding of different CBD techniques for each of these areas.
Abstract: Because of the extensive use of components, the component-based software engineering (CBSE) process is quite different from the traditional waterfall approach. CBSE not only requires focus on system specification and development, but also requires additional consideration of the overall system context, the properties of individual components, and the component acquisition and integration process. The term component-based software development (CBD) refers to the process of building a system using components. The CBD life cycle consists of a set of phases, namely identifying and selecting components based on stakeholder requirements, integrating and assembling the selected components, and updating the system as components evolve over time with newer versions. This work presents an indicative literature survey of techniques proposed for different phases of the CBD life cycle. The aim of this survey is to help provide a better understanding of the different CBD techniques for each of these areas.

62 citations


Journal ArticleDOI
TL;DR: A UML 2.0 profile for WebML which allows WebML models to be used in conjunction with other notations and modelling tools has been described and some key requirements for making this version of the standard more usable are identified.
Abstract: In recent years, we have witnessed how the Web Engineering community has started using the standard unified modelling language (UML) notation, techniques and supporting tools for modelling Web systems, which has led to the adaptation to UML of several existing modelling languages, notations and development processes. This interest for being MOF and UML-compliant arises from the increasing need to interoperate with other notations and tools, and to exchange data and models, thus facilitating reuse. WebML, like any other domain-specific language, allows one to express in a precise and natural way the concepts and mechanisms of its domain of reference. However, it cannot fully interoperate with other notations, nor be integrated with other model-based tools. As a solution to these requirements, a UML 2.0 profile for WebML which allows WebML models to be used in conjunction with other notations and modelling tools has been described. The paper also evaluates UML 2.0 as a platform for Web modelling and identifies some key requirements for making this version of the standard more usable.

49 citations


Journal ArticleDOI
TL;DR: The suggestion that in software development projects the emphasis must be on the project management, requirements engineering (RE) and design activities, and that consequently efforts in production activities should be minimised and performed as automatically as possible, is discussed.
Abstract: The suggestion that in software development projects the emphasis must be on the project management, requirements engineering (RE) and design activities, and that consequently efforts in production activities - such as traditional software programming and testing - should be minimised and performed as automatically as possible, is discussed. The Project IT approach, which integrates contributions from the RE and model-driven engineering communities, is also discussed. The goal of requirements specification is not just to manage textual specifications, but also to obtain a consistent requirements document that is in conformance with a domain-specific language and that can be re-used to support the design and development activities in the context of model-driven and code generation techniques. Furthermore, the feasibility and benefits of this approach are discussed by presenting a proof-of-concept case study, in which the orchestration of the concepts and concrete components related to the Project IT approach - the PIT-RSL, XIS and PIT-TSL languages and the Project IT-Studio CASE tool - is emphasised. A practical demonstration of the approach, including the description of the system requirements, the design of the system, the use of code generation techniques, and how they integrate to improve and accelerate the software engineering lifecycle, is presented.

43 citations


Journal ArticleDOI
TL;DR: Whether it is necessary to base IT industry and academic policy on expert opinion rather than on empirical evidence is assessed and quasi-experimental designs developed and used in the social sciences are proposed to improve the methodology for undertaking large-scale empirical studies in software engineering.
Abstract: A recent report on the state of the UK information technology (IT) industry based most of its findings and recommendations on expert opinion. It is surprising that the report was unable to incorporate more empirical evidence. This paper aims to assess whether it is necessary to base IT industry and academic policy on expert opinion rather than on empirical evidence. Current evidence related to the rate of project failure is identified and the methods used to accumulate that evidence discussed. This shows that the report failed to identify relevant evidence and most evidence related to project failure is based on convenience samples. The status of empirical research in the computing disciplines is reviewed showing that empirical evidence covers a restricted range of subjects and seldom addresses the 'Society' level of analysis. Other more robust designs that would address large-scale IT questions are discussed. We recommend adopting a more systematic approach to accumulating and reporting evidence. In addition, we propose using quasi-experimental designs developed and used in the social sciences to improve the methodology used for undertaking large-scale empirical studies in software engineering.

41 citations


Journal ArticleDOI
TL;DR: This work presents an overview of the development process of the Unified Modelling Language (UML)-based Web engineering (UWE) defined as an MDE approach, and focuses on the model transformation aspects of the UWE process.
Abstract: Software development techniques are continuously evolving with the goal of solving the main problems that still affect the building and maintenance of software systems: time, costs and error-proneness. Model-driven engineering (MDE) approaches aim to reduce at least some of these problems by providing techniques for the construction of models and the specification of transformation rules, tool support and automatic generation of code and documentation. The MDE method of resolution is to first build platform-independent models, transform them in later stages into technology-dependent models, and achieve automatic model and code generation based on transformation rules. Web engineering is a domain where model-driven approaches can be used to address evolution and adaptation of Web software to continuously emerging new platforms and changes in technologies. We present an overview of the development process of the Unified Modelling Language (UML)-based Web engineering (UWE) approach, defined as an MDE approach. The main characteristic of UWE is the use of standards, including the UML, XML metadata interchange (XMI) for model exchange, the meta-object facility (MOF) for metamodelling, model-driven architecture and the transformation language query/view/transformation (QVT). We focus on the model transformation aspects of the UWE process.

37 citations


Journal ArticleDOI
TL;DR: An empirical study of a web-based system development is carried out to examine the AOP against the OOP approach with regard to software development efficiency and design quality, and reveals that the AOP approach appears to be a full-fledged alternative to the pure OOP approach.
Abstract: The aspect-oriented programming (AOP) approach is supposed to enhance a system's features such as modularity, readability and simplicity. Owing to a better modularisation of crosscutting concerns, the developed system implementation would be less complex and more readable. Thus, software development efficiency would increase, so the system would be created faster than its object-oriented programming (OOP) equivalent. An empirical study of a web-based system development is carried out to examine the AOP against the OOP approach with regard to software development efficiency and design quality. The study reveals that the AOP approach appears to be a full-fledged alternative to the pure OOP approach. Nevertheless, the impact of AOP on software development efficiency and design quality was not confirmed. In particular, it appeared that design quality metrics were not significantly associated with using AOP instead of OOP. It is possible that the benefits of AOP will exceed the results obtained in the present study in experiments with a larger number of subjects.

30 citations
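The modularisation benefit the study above measures can be approximated in plain Python with a decorator: the logging "advice" lives in one place instead of being scattered through every method. This is only an analogy, with invented names; a real AOP weaver applies such advice automatically across many join points rather than one function at a time.

```python
import functools

# Decorator as a hand-applied "aspect": the crosscutting logging concern
# is written once, separate from the business logic it wraps.

calls = []  # stand-in for a logging sink

def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(fn.__name__)   # the crosscutting "advice"
        return fn(*args, **kwargs)
    return wrapper

@logged
def transfer(amount):
    return amount * 2   # hypothetical business logic

result = transfer(21)
```

In the OOP variant of the study's system, the equivalent of `calls.append(...)` would appear inside every logged method, which is exactly the tangling the aspect removes.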


Journal ArticleDOI
TL;DR: It is demonstrated, by resorting to Uppaal, a symbolic model checker for real-time systems, that the designed model satisfies the required properties, and the experimental results show the effectiveness of the approach presented here.
Abstract: Validation is an important task in complex embedded system designs. A method of modelling and analysing embedded systems with programmable logic controllers is presented. Controllers and physical plants are modelled using timed automata. The system requirements are specified and formalised as computational tree logic properties. It is demonstrated, by resorting to Uppaal, a symbolic model checker for real-time systems, that the designed model satisfies the required properties. A realistic example, the steeve controller of a theatre, illustrates the strategies. The safety and time constraint requirements are validated by Uppaal. The experimental results demonstrate the effectiveness of the approach presented here.

21 citations


Journal ArticleDOI
TL;DR: A semi-automatic approach is provided that allows generating MOF-based meta-models on the basis of DTDs, which represents an initial step towards a transition to employing MDE techniques within the WebML design methodology.
Abstract: Meta-models are a prerequisite for model-driven engineering (MDE) in general and consequently for model-driven web engineering in particular. Various web modelling languages, however, are not based on meta-models and standards, like the object management group's prominent meta object facility (MOF). Instead, they define proprietary languages focused rather on notational aspects. Thus, MDE techniques and tools cannot be deployed for such languages, preventing the full potential of MDE from being exploited in terms of standardised storage, exchange and transformation of models. The WebML web modelling language is one example that does not yet rely on an explicit meta-model in the sense of MDE. Instead, it is defined in terms of a document type definition (DTD), and implicitly within the accompanying tool. Code generation then has to rely on model-to-code transformations based on extensible stylesheet language transformations (XSLT). We propose a meta-model for WebML to bridge WebML to MDE. To establish such a meta-model, instead of remodelling WebML's meta-model from scratch, a semi-automatic approach is provided that allows generating MOF-based meta-models on the basis of DTDs. The meta-model for WebML accomplishes the following aims: first, it represents an initial step towards a transition to employing MDE techniques within the WebML design methodology. Second, the provision of a MOF-based meta-model ensures interoperability with other MDE tools. Third, it represents an important step towards a common meta-model for Web modelling in the future.

19 citations


Journal ArticleDOI
TL;DR: A runtime configurable fault management mechanism (FMM) is proposed, which detects deviations from given service specifications by intercepting interface calls and picks a repair action that incurs the best tradeoff between the success rate and the cost of repair.
Abstract: The Trust4All project aims to define an open, component-based framework for the middleware layer in high-volume embedded appliances that enables robust and reliable operation, upgrading and extension. To improve the availability of each individual application in a Trust4All system, a runtime configurable fault management mechanism (FMM) is proposed, which detects deviations from given service specifications by intercepting interface calls. When repair is necessary, FMM picks a repair action that incurs the best tradeoff between the success rate and the cost of repair. Considering that it is rather difficult to obtain sufficient information about third-party components during the early stages of their usage, FMM is designed to be able to accumulate knowledge and adapt its capability accordingly.
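The repair-selection idea in the abstract above can be sketched as a scoring problem. The scoring rule used here (estimated success rate divided by cost) and the action list are illustrative assumptions for the sketch, not the FMM's actual policy.

```python
# Hedged sketch of trade-off-based repair selection: pick the action whose
# estimated success rate per unit cost is highest. The score function is an
# invented stand-in for whatever trade-off the FMM actually computes.

def pick_repair(actions):
    """actions: list of (name, success_rate, cost) tuples; returns best name."""
    return max(actions, key=lambda a: a[1] / a[2])[0]

# Hypothetical repair actions, ordered from cheap to drastic.
actions = [
    ("retry_call", 0.60, 1.0),        # cheap, moderately effective
    ("restart_component", 0.90, 5.0),  # more effective, more disruptive
    ("reload_system", 0.99, 50.0),     # almost certain, very costly
]
best = pick_repair(actions)
```

With these invented numbers the cheap retry wins; as the mechanism accumulates knowledge about a component, updated success-rate estimates would shift the choice.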

Journal ArticleDOI
TL;DR: The final intention is to empower game developers and designers to work more productively, with a higher level of abstraction and closer to their application domain.
Abstract: An environment targeted at computer games development industrialisation in the .NET platform is presented. A computer game product line definition and its architecture are specified and implemented by means of software factory assets, such as a visual designer based on a domain-specific language, semantic validators and code generators. The proposed approach is then illustrated and empirically validated by the creation of real-world case studies. Finally, it is investigated how the proposed factory can be used as an edutainment platform for Computer Science 1 and 2 courses. The final intention is to empower game developers and designers to work more productively, with a higher level of abstraction and closer to their application domain.

Journal ArticleDOI
TL;DR: A three-layer workflow model for designing a workflow is presented and characterises the behaviour of an artifact by its state transition diagram and six types of inaccurate artifact usage affecting workflow execution are identified.
Abstract: Although many workflow models have been proposed, analyses on artifacts are seldom discussed. A workflow application with well structured and adequate resources may still fail or yield unexpected results in execution due to inaccurate artifact manipulation, for example, inconsistency between data flow and control flow, or contradictions between artifact operations. Thus, artifact analysis is very important since activities cannot be executed properly without accurate information. This paper presents a three-layer workflow model for designing a workflow and characterises the behaviour of an artifact by its state transition diagram. By abstracting common usages of artifacts, six types of inaccurate artifact usage affecting workflow execution are identified and a set of algorithms to detect these inaccurate usages in workflow specifications is presented. An example is demonstrated and then related works are compared.

Journal ArticleDOI
TL;DR: The potential of jLab is demonstrated by describing the implementation of a Support Vector Machine toolkit and by comparing its performance with a C/C++ and a Matlab version and across different computing platforms (i.e. Linux, Sun/Solaris and Windows XP).
Abstract: The jLab environment provides a Matlab/Scilab-like scripting language that is executed by an interpreter implemented in the Java language. This language supports all the basic programming constructs and an extensive set of built-in mathematical routines that cover all the basic numerical analysis tasks. Moreover, the toolboxes of jLab can be easily implemented in Java and the corresponding classes can be dynamically integrated into the system. The efficiency of Java compiled code can be directly utilised for any computationally intensive operations. Since jLab is coded in pure Java, the build-from-source process is much cleaner, faster, more platform independent and less error prone than that of similar C/C++/Fortran-based open-source environments (e.g. Scilab and Octave). Neuro-fuzzy algorithms can require enormous computation resources and at the same time an expressive programming environment. The potential of jLab is demonstrated by describing the implementation of a Support Vector Machine toolkit and by comparing its performance with a C/C++ and a Matlab version and across different computing platforms (i.e. Linux, Sun/Solaris and Windows XP).

Journal ArticleDOI
TL;DR: A lightweight mapping between Fundamental Business Process Modelling Language (FBPML) and the Web Services Ontology (OWL-S) is outlined, which implies that evolving Semantic Web technologies are not adequate for all service modelling needs and could thus benefit from the more traditional and mature BPM methods.
Abstract: Bridging the gap between enterprise modelling methods and Semantic Web services is an important yet challenging task. For organisations with business goals, the automation of business processes as Web services is increasingly important, especially with many business transactions taking place within the Web today. Taking one approach to address this problem, a lightweight mapping between Fundamental Business Process Modelling Language (FBPML) and the Web Services Ontology (OWL-S) is outlined. The framework entails a data model translation and a process model translation via the use of ontologies and mapping principles. Several working examples of the process model translations are presented together with the implementation of an automated translator. FBPML constructs and process models that could not be translated to OWL-S equivalents highlight the differences between the languages of the two domains. It also implies that evolving Semantic Web technologies, in particular OWL-S, are not adequate for all service modelling needs and could thus benefit from the more traditional and mature BPM methods. On a more interesting note, this is effectively the first step towards enabling a semantic-based business workflow system.

Journal ArticleDOI
TL;DR: This work proposes a formal approach for composing partial system behaviours wherepartial system behaviours are defined as finite state automata, each automaton represents a use-case that describes a certain system concern, hence the name use- case automaton (UCA).
Abstract: Modelling the behaviour of a system under development has proved to be a very effective way to ensure that it will be constructed correctly. However, building up this model is a difficult task that requires a significant time investment and a high level of expertise. Consequently, incremental approaches that construct a system model from partial behavioural descriptions have been widely adopted. The challenge in such approaches lies in finding both an adequate behavioural formalism that fits the needs of the analyst and a formal composition mechanism that facilitates the generation of the expected behavioural model and produces a verifiable model. Within this framework, use-case approaches have also been accepted in industry because they make the process of requirements elicitation simpler. Their main shortcoming is their lack of formalisation, which makes validation difficult. A formal approach for composing partial system behaviours, where partial system behaviours are defined as finite state automata, is proposed. Each automaton represents a use-case that describes a certain system concern, hence the name use-case automaton (UCA). The composition of different UCAs can be performed with respect to a set of states or transitions specified by the analyst, using certain composition operators. Each of these operators has a precise semantics, which is defined by how the composition is performed. The formalisation of use-case composition is based on label matching between the UCAs to be composed. Our approach is fully automated and provides the advantage of generating a UCA that meets the intended behaviour without unexpected scenarios. Finally, UMACT, a tool that implements our composition approach, is presented.
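The label-matching composition described above can be given a rough flavour with a product construction over finite automata: shared labels synchronise, all others interleave. The dictionary encoding and the single composition operator shown here are simplifying assumptions; the paper defines several operators, each with its own precise semantics.

```python
from itertools import product as cartesian

# Sketch of composing two use-case automata by label matching.
# An automaton is encoded as {state: {label: next_state}}; this is an
# illustrative representation, not the paper's formalism.

def compose(a, b):
    """Return the product automaton: shared labels synchronise, others interleave."""
    shared = ({l for t in a.values() for l in t}
              & {l for t in b.values() for l in t})
    trans = {}
    for sa, sb in cartesian(a, b):
        out = {}
        for l, na in a[sa].items():
            if l in shared:
                if l in b[sb]:
                    out[l] = (na, b[sb][l])   # synchronised step on a shared label
            else:
                out[l] = (na, sb)             # 'a' moves alone
        for l, nb in b[sb].items():
            if l not in shared:
                out[l] = (sa, nb)             # 'b' moves alone
        trans[(sa, sb)] = out
    return trans

# Two toy use-case automata sharing the label "login".
ucA = {"s0": {"login": "s1"}, "s1": {}}
ucB = {"t0": {"login": "t1"}, "t1": {"browse": "t1"}}
composed = compose(ucA, ucB)
```

In the composed automaton, "login" advances both use cases together, while "browse" remains a private move of the second one.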

Journal ArticleDOI
TL;DR: This review process is based on reviewers' efforts to produce high-quality software while minimising the inspection cost and has been successfully implemented in a CMM level 3 software development company intending to achieve CMMI level 5 and results are found to be quite encouraging.
Abstract: A considerable amount of software is produced world-wide by small and medium enterprises (SMEs). These organisations do not have enough resources to implement a rigorous quality plan. It has been established that reviews of various artifacts play a very important role in ensuring the quality of software. Traditional review methods are rigorous and their implementation is cumbersome for SMEs. A new review process which is easy to implement and requires almost no documentation is introduced. It is based on reviewers' efforts to produce high-quality software while minimising the inspection cost. Additionally, people who are conducting this review need not be present at the same place during most phases of the review process. This process has been successfully implemented in a CMM level 3 software development company intending to achieve CMMI level 5 and results are found to be quite encouraging.

Journal ArticleDOI
TL;DR: The major conclusion of this work is that exceptions are not being correctly used as an error-handling mechanism and these results contribute to the assessment of the effectiveness of the unchecked exceptions approach.
Abstract: The emergence of exception handling (EH) mechanisms in modern programming languages made available a different way of communicating errors between procedures. For years, programmers trusted in correct documentation of error codes returned by procedures to correctly handle erroneous situations. Now, they have to focus on the documentation of exceptions for the same effect. But to what extent can exception documentation be trusted? Moreover, is there enough documentation for exceptions? And in what way do these questions relate to the discussion on checked against unchecked exceptions? For a given set of Microsoft .NET applications, code and documentation were thoroughly parsed and compared. This showed that exception documentation tends to be scarce. In particular, it showed that 90% of exceptions are undocumented. Furthermore, programmers were demonstrated to be keener to document exceptions they explicitly throw while typically leaving exceptions resulting from method calls undocumented. This conclusion led to another question: how do programmers use the EH mechanisms available in modern programming languages? More than 16 different .NET applications were examined in order to provide an answer. The major conclusion of this work is that exceptions are not being correctly used as an error-handling mechanism. These results contribute to the assessment of the effectiveness of the unchecked exceptions approach.
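The core measurement of the study, comparing the exceptions code actually raises against those its documentation mentions, can be sketched in miniature. The study parsed .NET code and XML documentation; this Python analogue using docstrings and a ":raises X:" convention is an invented stand-in for that pipeline.

```python
import ast

# Illustrative sketch: find exceptions a function raises but does not document.
# The ":raises <Name>:" docstring convention is an assumption of this sketch.

def undocumented_exceptions(source):
    fn = ast.parse(source).body[0]
    doc = ast.get_docstring(fn) or ""
    documented = {line.split(":raises ")[1].split(":")[0].strip()
                  for line in doc.splitlines() if ":raises " in line}
    raised = {node.exc.func.id for node in ast.walk(fn)
              if isinstance(node, ast.Raise)
              and isinstance(node.exc, ast.Call)
              and isinstance(node.exc.func, ast.Name)}
    return raised - documented

src = '''
def withdraw(balance, amount):
    """Withdraw money.

    :raises ValueError: if amount is negative
    """
    if amount < 0:
        raise ValueError("negative")
    if amount > balance:
        raise RuntimeError("overdrawn")   # raised but never documented
    return balance - amount
'''
missing = undocumented_exceptions(src)
```

Run at scale over a code base, counting `missing` per method is one way to arrive at figures like the 90% undocumented rate the study reports.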

Journal ArticleDOI
TL;DR: Evaluation results suggest that practical requirement models do contain overlapping use cases, and a detection approach using sequence diagrams and statecharts is proposed that is effective in detecting them.
Abstract: To deal with the complexity of large information systems, the divide-and-conquer policy is usually adopted to capture requirements from a large number of stakeholders: obtain requirements from different stakeholders, and then put them together to form a full requirement specification. One of the problems induced by this policy is overlapping requirements. A use-case-driven approach cannot avoid overlapping requirements either: it produces overlapping use cases, which are even more harmful, because a use case describes not only inputs and outputs, as traditional requirements do, but also the scenarios. Each of the overlapping use cases provides a message sequence to implement the common subgoal. Overlapping use cases not only decrease the maintainability of the requirement specification, but also result in a complicated, confusing and expensive system. Worse still, it is difficult to detect overlapping use cases with existing methods for requirement management. To detect overlapping use cases, a detection approach using sequence diagrams and statecharts is proposed. Evaluation results suggest that practical requirement models do contain overlapping use cases, and the proposed approach is effective in detecting them.
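The intuition behind the detection, that two use cases overlap when their message sequences share a substantial common part, can be sketched with a longest-common-subsequence comparison. The sequence encoding and the 50% threshold are illustrative assumptions; the paper's approach works on sequence diagrams and statecharts, not raw strings.

```python
# Sketch: flag two use cases as overlapping when their message sequences
# share a long common subsequence. Threshold and encoding are invented.

def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def overlapping(uc1, uc2, threshold=0.5):
    return lcs_len(uc1, uc2) / min(len(uc1), len(uc2)) >= threshold

# Two hypothetical ATM use cases sharing an authentication subgoal.
uc_deposit = ["insertCard", "enterPin", "selectDeposit", "confirm"]
uc_withdraw = ["insertCard", "enterPin", "selectWithdraw", "confirm"]
flag = overlapping(uc_deposit, uc_withdraw)
```

Here the shared "insertCard, enterPin, ... confirm" skeleton trips the threshold, which is exactly the kind of common subgoal the paper argues should be factored out.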

Journal ArticleDOI
TL;DR: A generic component-based modifiability approach is proposed here, and used to build a highly-modifiable middleware framework that provides design support for building component frameworks (CFs), that is reusable and extensible component architectures that are targeted at specific domains.
Abstract: Because of the increasingly diverse and dynamic environments in which they must operate, modern middleware platforms need to explicitly support ‘modifiability’. Modifiability should encompass change that is both static and dynamic, small scale and large scale. Also, the process of modification should be flexible, easy to perform and consistency-preserving. To address these needs, a generic component-based modifiability approach is proposed here, and used to build a highly-modifiable middleware framework. The modifiability approach provides design support for building component frameworks (CFs), that is reusable and extensible component architectures that are targeted at specific domains. In the approach, CFs build upon a minimal, technology-independent component model and can be recursively assembled into more complex CFs. The middleware framework – an instantiation of the proposed approach – takes the form of a specific assembly of CFs, each of which addresses a distinct middleware-related concern. This middleware framework supports two styles of modification: First, ‘architectural modification’ enables large-scale, static changes, such as customising the framework to a new application domain or underlying infrastructure. Second, ‘system modification’ enables changes that are based on specific customisations of the framework; these changes are smaller in scope (e.g. replacing protocol implementations) but are applicable at both deploy-time and run-time. A prototype implementation demonstrates the feasibility of the approach and framework presented and demonstrates a sufficient degree of supported modifiability.

Journal ArticleDOI
TL;DR: A case study is presented applying the Petri-net formal method to a contemporary research area: IEEE 802.11 centralised control mechanisms to support delay-sensitive streams and bursty data traffic, demonstrating the potential that the Petri-net formal method has for analysing process and protocol models to support reconfigurable devices.
Abstract: Full or partial reconfiguration of communications devices offers both optimised performance for niche scenario-specific deployments and support for de-regulated radio spectrum management. The correctness of the protocols or protocol-enhancements being deployed in such a dynamic and autonomous manner cannot easily be determined through traditional testing techniques. Formal description techniques are a key verification technique for protocols. The Petri-net formal description technique offers the best combination of intuitive representation, tool-support and analytical capabilities. Having described key features and analytical approaches of Reference-nets (an extended Petri-net formalism), a case study is presented applying this approach to a contemporary research area: IEEE 802.11 centralised control mechanisms to support delay-sensitive streams and bursty data traffic. This case study showcases the ability both to generate performance-oriented simulation results and to determine more formal correctness properties. The simulation results allow comparison with published results and show that a packet-expiration mechanism places greater demands on the contention-free resource allocation, while the mathematical analysis of the model reveals it to be free of deadlock and k-bounded with respect to resources. The work demonstrates the potential that the Petri-net formal method has for analysing process and protocol models to support reconfigurable devices.
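For readers new to the formalism above, the basic Petri-net execution rule is small enough to sketch: a marking assigns tokens to places, and a transition fires only when every input place holds enough tokens. The places and the "send" transition below are invented; Reference-nets extend this basic token game considerably.

```python
# Minimal Petri-net sketch: markings are {place: token_count} dicts and a
# transition is (inputs, outputs), each a {place: tokens} dict.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    inputs, outputs = transition
    m = dict(marking)
    for p, n in inputs.items():      # consume input tokens
        m[p] -= n
    for p, n in outputs.items():     # produce output tokens
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical "send" transition: needs a ready frame and a channel slot,
# produces a sent frame and returns the slot (the resource stays bounded).
send = ({"ready": 1, "slot": 1}, {"sent": 1, "slot": 1})
m0 = {"ready": 1, "slot": 1}
m1 = fire(m0, send)
```

Properties such as the k-boundedness and deadlock-freedom mentioned in the abstract are statements about which markings are reachable under exactly this firing rule.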

Journal ArticleDOI
TL;DR: This work investigates the basic infrastructure required for building a robust and user-friendly AOP tool for .NET comparable with the Java-based AspectJ, and assesses the different classes of weavers that can be built on top of the CLR today and investigates what extensions to the platform would be needed in order to enable more sophisticated weaving technologies.
Abstract: Aspect-oriented programming (AOP), now practically a consolidated academic discipline, has yet to build more solid industrial foundations, especially in the realms of the .NET platform. It is believed that this situation is caused by the lack of a robust and user-friendly AOP tool for .NET comparable with the Java-based AspectJ. This work investigates the basic infrastructure required for building such a tool: aspect-oriented weaving within the common language runtime (CLR) environment. In this regard, a classification schema is built, analysing the attributes a hypothetical aspect weaver for .NET might have. The different classes of weavers that can be built on top of the CLR today are assessed, and the extensions to the platform that would be needed in order to enable more sophisticated weaving technologies are investigated. Typical use cases for the resulting AOP tools are presented, and the attributes a corresponding weaver would need in order to fulfil these requirements are classified. Finally, two existing aspect weaver implementations are analysed in terms of these same attributes.

Journal ArticleDOI
TL;DR: This study describes the design, architecture, implementation and performance measurements of a DGC algorithm for .NET that is complete, capable of reclaiming both acyclic and cyclic garbage, while being portable in the sense that it neither requires the underlying virtual machine to be modified, nor source or byte-code modification.
Abstract: The memory management of distributed objects, when done manually, is an error-prone task. It leads to memory leaks and dangling references, causing applications to fail. Avoiding such errors requires automatic memory management, called distributed garbage collection (DGC). Current DGC solutions are either not safe, not complete or not portable to widely used platforms such as .NET. As a matter of fact, most solutions either run on specialised environments or require modifications of the underlying virtual machine (e.g. rotor, common language runtime (CLR)), hindering its immediate and widespread utilisation. This study describes the design, architecture, implementation and performance measurements of a DGC algorithm for .NET that: (i) is complete, that is, capable of reclaiming both acyclic and cyclic garbage, while (ii) being portable in the sense that it neither requires the underlying virtual machine to be modified, nor source or byte-code modification. The distributed garbage collector was implemented on top of two implementations of the common language infrastructure (.NET virtual machine specification): CLR and shared source CLI, commonly known as Rotor. The implementation requires no modification of the environment, it makes use of the provided aspect-oriented functionalities, and the performance results are encouraging.
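The completeness claim above, reclaiming cyclic as well as acyclic garbage, can be illustrated with a toy object graph: reference counting (the basis of many distributed schemes) never frees a cycle, whereas a trace from the roots identifies it. The graph encoding is invented for illustration and says nothing about the paper's actual distributed algorithm.

```python
# Sketch contrasting acyclic and cyclic garbage. Objects and references are
# a plain dict {obj: set of referenced objs}; "root" stands for the root set.

def reachable(roots, refs):
    """Return the set of objects reachable from the roots."""
    live, stack = set(), list(roots)
    while stack:
        o = stack.pop()
        if o not in live:
            live.add(o)
            stack.extend(refs.get(o, ()))
    return live

# 'a' is live; 'b' and 'c' reference each other but nothing reaches them,
# so they are cyclic garbage that pure reference counting would never free.
refs = {"root": {"a"}, "a": set(), "b": {"c"}, "c": {"b"}}
garbage = set(refs) - reachable({"root"}, refs)
```

A complete DGC must, in effect, perform this kind of global reachability reasoning across processes, which is what makes doing it portably, without VM modification, the hard part.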


Journal ArticleDOI
TL;DR: In this paper, a systematic way is provided to synthesise an access control mechanism, which not only guarantees all specifications to be obeyed, but also allows each user to attain maximum permissive behaviours.
Abstract: Security in component-based software applications is studied by looking at information leakage from one component to another through operation calls. Components, and security specifications about confidentiality, are modelled as regular languages. Then a systematic way is provided to synthesise an access control mechanism, which not only guarantees that all specifications are obeyed, but also allows each user to attain maximally permissive behaviour.
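Since the specifications above are regular languages, enforcement can be pictured as running a DFA alongside the component and blocking any call with no matching transition. The spec below ("after reading the secret, the component may not send") and the state names are invented for illustration; the paper synthesises such mechanisms systematically rather than writing them by hand.

```python
# Sketch of an access-control monitor driven by a regular specification.
# The DFA is a {(state, call): next_state} dict; a missing entry means the
# call would violate the confidentiality spec and is blocked.

def allowed(dfa, start, calls):
    state = start
    for c in calls:
        nxt = dfa.get((state, c))
        if nxt is None:
            return False          # blocked: no transition for this call
        state = nxt
    return True

# Invented spec: once the secret has been read, 'send' is forbidden.
dfa = {("clean", "readSecret"): "tainted",
       ("clean", "send"): "clean",
       ("tainted", "readSecret"): "tainted"}

ok = allowed(dfa, "clean", ["send", "readSecret"])
bad = allowed(dfa, "clean", ["readSecret", "send"])
```

Maximal permissiveness means the synthesised DFA rejects only the sequences it must: here "send then readSecret" is still allowed, because the secret had not yet been read when the send occurred.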